How to configure Cygnus to save in mysql - fiware

I'm trying to configure Cygnus in order to persist Orion context data in a MySQL database. I have installed phpMyAdmin, and I'm trying to use its underlying MySQL database to save the data. The whole workflow is the following: Orion receives some data, sends it to Cygnus, and finally Cygnus persists it in the SQL database.
This is my configuration:
# OrionMySQLSink configuration
# channel name from where to read notification events
cygnusagent.sinks.mysql-sink.channel = mysql-channel
# sink class, must not be changed
cygnusagent.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.OrionMySQLSink
# the FQDN/IP address where the MySQL server runs
cygnusagent.sinks.mysql-sink.mysql_host = x.y.z.w
# the port where the MySQL server listens for incoming connections
cygnusagent.sinks.mysql-sink.mysql_port = 3306
# a valid user in the MySQL server
cygnusagent.sinks.mysql-sink.mysql_username = root
# password for the user above
cygnusagent.sinks.mysql-sink.mysql_password = xxxxxxxxxxxx
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.mysql-sink.attr_persistence = column

A correct configuration file looks like this:
# channel name from where to read notification events
cygnusagent.sinks.mysql-sink.channel = mysql-channel
# sink class, must not be changed
cygnusagent.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.OrionMySQLSink
# the FQDN/IP address where the MySQL server runs
cygnusagent.sinks.mysql-sink.mysql_host = localhost
# the port where the MySQL server listens for incoming connections
cygnusagent.sinks.mysql-sink.mysql_port = 3306
# a valid user in the MySQL server
cygnusagent.sinks.mysql-sink.mysql_username = YOURUSERNAME
# password for the user above
cygnusagent.sinks.mysql-sink.mysql_password = YOURPASSWORD
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.mysql-sink.attr_persistence = column
You should also have a look at iptables and open the MySQL port (the default is 3306).
For testing purposes you can run the Context Broker in a terminal (don't forget to stop the service first) with
contextBroker -port 1026
and Cygnus in another terminal (again, stop the service first) with
/usr/cygnus/bin/cygnus-flume-ng agent --conf /usr/cygnus/conf/ -f /usr/cygnus/conf/YOURAGENT.CONF -n cygnusagent -Dflume.root.logger=INFO,console
(take care to change "YOURAGENT.CONF" and "cygnusagent" if you changed the name of the agent)
so you can see the output in real time.
The database tables are not created automatically in column mode, so you have to create them yourself.
The columns look like:
recvTime - datetime,
field1,
field2,
...
field1_md - varchar,
field2_md - varchar,
...
If you change
cygnusagent.sinks.mysql-sink.attr_persistence = column
to
cygnusagent.sinks.mysql-sink.attr_persistence = row
the tables are created automatically, but I prefer the column mode for saving and handling the data.
I hope this helps you.

Related

How do I load a CSV file into a Db2 Event Store remotely using a Db2 client?

I see in the documentation for Db2 Event Store that a CSV file can be loaded into the system when the file is already within the system, in this document: https://www.ibm.com/support/knowledgecenter/en/SSGNPV_2.0.0/local/loadcsv.html. I also found that you can connect to a Db2 Event Store database using the standard Db2 client, as in "How do I connect to an IBM Db2 Event Store instance from a remote Db2 instance?". What I am trying to do now is load a CSV file over that connection. Is it possible to load it remotely?
This should be doable by specifying an extra keyword, REMOTESOURCE YES, e.g.:
db2 "INSERT INTO import_test SELECT * FROM EXTERNAL '/home/db2v111/data.del' USING (DELIMITER ',' REMOTESOURCE YES)"
see an example here:
IMPORT script on IBM DB2 Cloud using RUN SQL Interface
Other answers have mentioned the connection and loading using the traditional db2 client. I have to add some more details that are required specifically for Db2 Event Store.
Assume we are using a Db2 client container, which can be found on Docker Hub with the tag ibmcom/db2.
Basically we have to go through following steps:
1/ establish a remote connection from the db2 client container to the remote Db2 Event Store database
2/ use db2 CLP commands to load the CSV file using db2's external table load feature, which loads the CSV file from the db2 client container into the remote Event Store database
Step 1:
Run the following commands, or run them as a script. Note that the commands need to be run as the db2 user in the db2 client container; the db2 user name is typically db2inst1.
#!/bin/bash -x
NODE_NAME=eventstore
. /database/config/db2inst1/sqllib/db2profile
### create new keydb used for authentication
# remove old keydb files
rm -rf $HOME/mydbclient.kdb $HOME/mydbclient.sth $HOME/mydbclient.crl $HOME/mydbclient.rdb
$HOME/sqllib/gskit/bin/gsk8capicmd_64 -keydb -create -db $HOME/mydbclient.kdb -pw ${SSL_KEY_DATABASE_PASSWORD} -stash
KEYDB_PATH=/var/lib/eventstore/clientkeystore
# get the target eventstore cluster's SSL public certificate using REST api
bearerToken=`curl --silent -k -X GET "https://$IP/v1/preauth/validateAuth" -u $EVENT_USER:$EVENT_PASSWORD | python -c "import sys, json; print (json.load(sys.stdin)['accessToken']) "`
curl --silent -k -X GET -H "authorization: Bearer $bearerToken" "https://${IP}:443/com/ibm/event/api/v1/oltp/certificate" -o $HOME/server-certificate.cert
# insert eventstore cluster's SSL public cert into new gskit keydb
$HOME/sqllib/gskit/bin/gsk8capicmd_64 -cert -add -db $HOME/mydbclient.kdb -pw ${SSL_KEY_DATABASE_PASSWORD} -label server -file $HOME/server-certificate.cert -format ascii -fips
# let db2 client use the new keydb
$HOME/sqllib/bin/db2 update dbm cfg using SSL_CLNT_KEYDB $HOME/mydbclient.kdb SSL_CLNT_STASH $HOME/mydbclient.sth
# configure connection from db2Client to remote EventStore cluster.
$HOME/sqllib/bin/db2 UNCATALOG NODE ${NODE_NAME}
$HOME/sqllib/bin/db2 CATALOG TCPIP NODE ${NODE_NAME} REMOTE ${IP} SERVER ${DB2_CLIENT_PORT_ON_EVENTSTORE_SERVER} SECURITY SSL
$HOME/sqllib/bin/db2 UNCATALOG DATABASE ${EVENTSTORE_DATABASE}
$HOME/sqllib/bin/db2 CATALOG DATABASE ${EVENTSTORE_DATABASE} AT NODE ${NODE_NAME} AUTHENTICATION GSSPLUGIN
$HOME/sqllib/bin/db2 terminate
# Ensure to use correct database name, eventstore user credential in remote
# eventstore cluster
$HOME/sqllib/bin/db2 CONNECT TO ${EVENTSTORE_DATABASE} USER ${EVENT_USER} USING ${EVENT_PASSWORD}
Some important variables:
EVENTSTORE_DATABASE: database name in the remote Event Store cluster
EVENT_USER: Event Store user name in the remote cluster
EVENT_PASSWORD: Event Store user password in the remote cluster
IP: public IP of the remote Event Store cluster
DB2_CLIENT_PORT_ON_EVENTSTORE_SERVER: JDBC port of the remote Event Store cluster, typically 18730
SSL_KEY_DATABASE_PASSWORD: password of the gskit keydb file in the db2 client container; you can set it as you like
After running the commands above, you should have established the connection between the local db2 client container and the remote Event Store cluster.
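As a side note, the token-extraction step in the script above pipes the REST response through a small python one-liner. On a simulated response body (the token value here is made up), that extraction behaves like:

```python
import json

# Simulated JSON response from /v1/preauth/validateAuth (the token value is hypothetical)
body = '{"accessToken": "abc123", "username": "admin"}'

# The script's python -c one-liner performs exactly this extraction
token = json.loads(body)["accessToken"]
print(token)  # → abc123
```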
Step 2: Load the CSV file using the external table feature of db2
Once the connection between the db2 client and the remote Event Store cluster is established, we can issue db2 CLP commands just as we would against any local db2 database.
For example:
-- establish remote connection to the eventstore database
-- replace the variables in ${} with the same values you used above
CONNECT TO ${EVENTSTORE_DATABASE} USER ${EVENT_USER} USING ${EVENT_PASSWORD}
SET CURRENT ISOLATION UR
-- create table in the remote eventstore database
CREATE TABLE db2cli_csvload (DEVICEID INTEGER NOT NULL, SENSORID INTEGER NOT NULL, TS BIGINT NOT NULL, AMBIENT_TEMP DOUBLE NOT NULL, POWER DOUBLE NOT NULL, TEMPERATURE DOUBLE NOT NULL, CONSTRAINT "TEST1INDEX" PRIMARY KEY(DEVICEID, SENSORID, TS) INCLUDE (TEMPERATURE)) DISTRIBUTE BY HASH (DEVICEID, SENSORID) ORGANIZE BY COLUMN STORED AS PARQUET
-- external table load to the remote eventstore database
INSERT INTO db2cli_csvload SELECT * FROM EXTERNAL '${DB_HOME_IN_CONTAINER}/${CSV_FILE}' LIKE db2cli_csvload USING (delimiter ',' MAXERRORS 10 SOCKETBUFSIZE 30000 REMOTESOURCE 'YES' LOGDIR '/database/logs' )
CONNECT RESET
TERMINATE
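The statements above can be saved to a script file (load.clp is a hypothetical name) and run in one shot through the CLP; -t treats ; as the statement terminator, -v echoes each statement, and -f reads from the file:

```shell
$HOME/sqllib/bin/db2 -tvf load.clp
```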
For more information, you can check the Db2 Event Store public GitHub repo:
https://github.com/IBMProjectEventStore/db2eventstore-IoT-Analytics/tree/master/db2client_remote/utils

error on starting grafana service

I have Grafana 4.4.3 on an Ubuntu 16.04 LTS VM whose IP is 1.2.3.4.
I also have a MySQL database, version 5.0.95, on a CentOS 5.9 machine with IP 5.5.5.5; my database name is voip.
I want to set MySQL as the backend for Grafana. I've changed my grafana.ini file like this:
###[database]###
type = mysql
host = 5.5.5.5:3306
name = voip
user = root
password = t#123
###[session]###
provider: mysql
provider_config = `root:t#123@tcp(5.5.5.5:3306)/voip`
I also set my root account to be used as a remote account.
when I want to start grafana-server service, it gives me this error:
Fail to initialize orm engine" logger=sqlstore error="Sqlstore::Migration
failed err: this user requires old password authentication. If you still
want to use it, please add 'allowOldPasswords=1' to your DSN. See also
https://github.com/go-sql-driver/mysql/wiki/old_passwords\n"
What should I do? Did I do anything wrong?
The allowOldPasswords error is given when you are using an old version of the MySQL database. To change this, go to /etc/my.cnf and change old_passwords = 1 to old_passwords = 0. Next, log in to MySQL and enter these commands:
SET SESSION old_passwords=FALSE;
SET PASSWORD FOR 'user_name'@'%' = PASSWORD('<put password here>');
FLUSH PRIVILEGES;
Finally, restart your MySQL service.
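Alternatively, as the error message itself suggests, you can keep the old-format password and pass allowOldPasswords=1 as a DSN parameter of the go-sql-driver. In grafana.ini that would look something like this (a sketch reusing the credentials from the question):

```ini
provider_config = `root:t#123@tcp(5.5.5.5:3306)/voip?allowOldPasswords=1`
```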
hope this post is helpful...

MySQL/MariaDB configuration sections for client

Is there a way to print the default configuration variables or configuration sections for the mysql/mariadb client?
I have a configuration file for example:
[client]
user = abc
passw = bcd
!include /another/my.cnf
!includedir /another/configurations/
In /another/my.cnf I have
[clientA]
user = abc
passw = bcd
host = example.com
I would like to know whether the configuration section [clientA] exists.
Now when I connect with mysql --defaults-group-suffix=B, it still connects based on the [client] section, without any warning that the suffix B is nonexistent.
Is there any command that would print me the combined my.cnf file with its sections?
You can use the my_print_defaults utility. Using --defaults-group-suffix makes the utility read options from the groups with the specified suffix along with the usual groups.
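For example, to see what the client would actually pick up for a given suffix (the group names are the ones from the question):

```shell
# prints the merged options from [client] and [clientA]
my_print_defaults --defaults-group-suffix=A client
```

If the suffixed group does not exist, only the [client] options are printed, which lets you detect a nonexistent section.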

MySQL does not display during setup of Zabbix

I have an issue during Zabbix 2.4-1 installation on Ubuntu 14.04.
I followed the manual from the official site; the installation was successful, no errors.
Next I created user 'zabbix' with all privileges and database 'zabbix' and imported schema.sql, images.sql and data.sql from Zabbix archive. Then I changed configuration files in /etc/zabbix dir:
zabbix.conf.php
// Zabbix GUI configuration file
global $DB;
// Valid types are MYSQL, SQLITE3 or POSTGRESQL
$DB["TYPE"] = 'MYSQL';
$DB["SERVER"] = 'localhost';
$DB["PORT"] = '3306';
// SQLITE3 use full path to file/database: $DB["DATABASE"] = '/var/lib/zabbix/zabbix.sqlite3';
$DB["DATABASE"] = 'zabbix';
$DB["USER"] = 'zabbix';
$DB["PASSWORD"] = 'root';
// SCHEMA is relevant only for IBM_DB2 database
$DB["SCHEMA"] = '';
$ZBX_SERVER = 'localhost';
$ZBX_SERVER_PORT = '10051';
$ZBX_SERVER_NAME = '';
$IMAGE_FORMAT_DEFAULT = IMAGE_FORMAT_PNG;
zabbix-server-mysql.conf
...
# dbc_dbtype: type of underlying database to use
# this exists primarily to let dbconfig-common know what database
# type to use when a package supports multiple database types.
# don't change this value unless you know for certain that this
# package supports multiple database types
dbc_dbtype='mysql'
# dbc_dbuser: database user
# the name of the user who we will use to connect to the database.
dbc_dbuser='zabbix'
# dbc_dbpass: database user password
# the password to use with the above username when connecting
# to a database, if one is required
dbc_dbpass='root'
...
I start it with the Apache server. When I go to localhost/zabbix I see the Zabbix installer. On step 3 it's necessary to configure the DB connection, and here is the problem:
The database type select list only offers PostgreSQL, so I can't connect to the database (the MySQL server is running) and go to the next step.
Your PHP doesn't support MySQL at the moment. You need to install some php-mysql* package and then restart your webserver.
The PHP package for MySQL is not there, so MySQL will not show up during the Zabbix setup installation.
Install the php-mysql package to resolve this error:
#apt-get install php7.0-mysql
and then restart apache
#systemctl restart apache2
I had the same problem installing Zabbix 4.0 on Ubuntu.
So I ran
apt install php libapache2-mod-php php-mysql
sudo service apache2 restart
and it worked for me.

OpenLDAP configuration error ldap_bind: Invalid credentials (49)

I'm using Ubuntu 10.04 server and I'm trying to configure OpenLDAP as the authentication protocol for SVN and other services. However, I don't quite understand how LDAP works, and after setting up an example config I tried to populate it, without success. This is the error:
ldap_bind: Invalid credentials (49)
It seems to be a problem with the example config, more precisely with the admin configuration. However, I tried to change it using a hashed password but got no results. Config below:
# Load modules for database type
dn: cn=module,cn=config
objectclass: olcModuleList
cn: module
olcModuleLoad: back_bdb.la
# Create directory database
dn: olcDatabase=bdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcBdbConfig
olcDatabase: bdb
# Domain name (e.g. home.local)
olcSuffix: dc=home,dc=local
# Location on system where database is stored
olcDbDirectory: /var/lib/ldap
# Manager of the database
olcRootDN: cn=admin,dc=home,dc=local
olcRootPW: admin
# Indices in database to speed up searches
olcDbIndex: uid pres,eq
olcDbIndex: cn,sn,mail pres,eq,approx,sub
olcDbIndex: objectClass eq
# Allow users to change their own password
# Allow anonymous to authenticate against the password
# Allow admin to change anyone's password
olcAccess: to attrs=userPassword
by self write
by anonymous auth
by dn.base="cn=admin,dc=home,dc=local" write
by * none
# Allow users to change their own record
# Allow anyone to read directory
olcAccess: to *
by self write
by dn.base="cn=admin,dc=home,dc=local" write
by * read
Have you tried to connect via CLI?
ldapsearch -x -D "cn=admin,dc=home,dc=local" -W -h <hostname>
Do check your syslog, slapd by default logs its output there.
You can also use slapcat, which must be executed locally, to know whether your database was created or not (slapd would break otherwise, anyway). It will output the first database available. Use the flag -n to extract a specific database:
slapcat -n <database number>
My bets are that you're authenticating against the wrong database.
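As an aside: olcRootPW is stored in clear text ("admin") in the config above, which is fragile. You can generate a hashed value with slappasswd and paste its output as the olcRootPW value instead:

```shell
# generate a salted SHA hash suitable for olcRootPW
slappasswd -s admin
```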