Connect Orion to a specific database - fiware

The server suffered an unexpected reboot, and when the contextBroker restarted it no longer connected to the old entities. If I enter MongoDB, two databases appear: orion and orion-tests.
I would like to connect to orion-tests, which is where I had all the entities.
When I created a new entity with the Fiware-Service and Fiware-ServicePath headers, a new database was always created for that service, but at the moment everything is added to orion.
What is the problem? How can I fix this? I have more than 100 entities created in the previous database.
EDIT01
This is the information:
ps -ax | grep contextBroker
9275 pts/2 S+ 0:00 grep --color=auto contextBroker
19825 ? Ssl 0:45 contextBroker

A quick fix to connect Orion to another database is to use the -db parameter.
docker run fiware/orion -db orion-tests
To really fix the issue, read the section on Database Administration and regularly take a database dump of your entities. All you need to do is mongorestore the data from orion-tests into orion:
mongodump --db old_database
mongorestore --db new_database ./dump/old_database
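For this specific case, assuming the dump is taken on the host running mongod, that would look roughly like:
mongodump --db orion-tests
mongorestore --db orion ./dump/orion-tests
(If the target orion database already contains data you care about, try the restore on a test instance first.)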

You need to run Orion with the multiservice flag enabled in order for it to process the Fiware-Service header. Try using contextBroker -multiservice instead of contextBroker.
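If Orion was installed from the RPM packages and runs as a system service, the settings can also be made permanent in the service configuration file. A small sketch, assuming the default /etc/sysconfig/contextBroker of that install (the path and variable names are assumptions, check your own setup):
# /etc/sysconfig/contextBroker
BROKER_DATABASE_NAME=orion-tests
BROKER_EXTRA_OPS="-multiservice"
Then restart the contextBroker service so the settings take effect.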

Related

mongoimport Failed: error connecting to db server: no reachable servers hosted on google compute

I am using the command below to try to import a JSON array of JSON documents. Whatever I do, I get the error in the title. I am using a replica set named rs0 and I am running the command from the Compute Engine instance running the mongod service. I tried using both localhost and 127.0.0.1 as the local host seed, as well as the IP of the second replica member and the external IP of the local host.
mongoimport --db <db_name> --collection <collection_name> --username <uname> --password <pass> --host rs0/[ip_of_other_replica_member:27017],[127.0.0.1:27017] --type json --file "/tmp/json_backup_wilf17/json_array (10).json" --jsonArray --authenticationDatabase <db_name(same as --db)>
As mentioned, I keep getting Failed: error connecting to db server: no reachable servers.
mongod is running and I can log into the mongo shell. I tried using rs.slaveOk() and am now officially out of ideas.
I got myself into this situation as well (manually created cluster) when I forgot the rs.initiate() call.
Try checking that you actually have a replica set configuration.
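From the mongo shell on that instance, something along these lines shows whether a replica set config exists and initiates one if it does not (rs0 is the set name from the question):
rs.status()    // typically reports that no replica set config has been received if rs.initiate() was never run
rs.initiate()  // initiate the set with the current member only
rs.conf()      // confirm the set name and member list afterwards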
After a long time looking at this, it was the most idiotic of reasons:
--file "/tmp/json_backup_wilf17/json_array (10).json"
notice the space between the 'y' in 'array' and the '(' in '(10)'? Eliminate it and it will work.
It's a matter of an ambiguous error message on MongoDB's behalf.
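For example, one way to get rid of the space before re-running the import (same path as in the question):
mv "/tmp/json_backup_wilf17/json_array (10).json" /tmp/json_backup_wilf17/json_array_10.json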

snmptrapd doesn't log to MySQL

I am trying to log SNMP traps to a MySQL database, but unfortunately without results.
OS - Debian
Net-SNMP v.5.7.3
MySQL 5.1
I am using snmptrapd and did the configuration from here
here is my snmptrapd.conf:
authCommunity log public
sqlMaxQueue 1
sqlSaveInterval 9
I did
./configure --with-defaults --with-mysql
as in the manual. Then
make
make install
Here is my ~/.my.cnf:
[snmptrapd]
user=snmp
password=******
host=localhost
my /default/snmpd:
#export MIBS=
#SNMPDRUN=yes
#SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid'
TRAPDRUN=yes
TRAPDOPTS='-Lsd -p /var/run/snmptrapd.pid'
SNMPDCOMPAT=yes
I have the exact DB schema as in the manual.
I have success logging to syslog, but nothing in MySQL. Even the MySQL log doesn't show anything. It looks like snmptrapd doesn't reach MySQL at all.
Can anyone give me an idea of what I am missing?
I found the solution to my problem.
I had been changing /etc/snmp/snmptrapd.conf and MySQL logging didn't work. I just found that there is another snmptrapd.conf in /usr/local/etc/snmp/snmptrapd.conf, which I filled with the configuration shown in my first post.
So far it works!
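To avoid the two-config-files confusion altogether, snmptrapd can be told to read exactly one file. A sketch based on the /default/snmpd options from the question (-C skips the default config locations, -c reads only the given file):
TRAPDOPTS='-Lsd -p /var/run/snmptrapd.pid -C -c /etc/snmp/snmptrapd.conf'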
I have found an article for you:
http://ethertype.blogspot.com/2015/10/logging-snmp-traps-to-mysqlmariadb.html
You must set your database name to "net_snmp".
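Once the schema is in place, a quick end-to-end check is to send a trap to yourself and see whether a row arrives. A sketch assuming the stock net-snmp schema, where traps land in the notifications table of the net_snmp database:
snmptrap -v 2c -c public localhost '' 1.3.6.1.6.3.1.1.5.1
mysql -u snmp -p net_snmp -e "SELECT COUNT(*) FROM notifications;"
Keep in mind that snmptrapd only flushes queued traps to MySQL on the sqlSaveInterval set in snmptrapd.conf, so allow a few seconds before checking.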

aws emr hive metastore configure hive-site.xml

I'm trying to configure hive-site.xml to use a MySQL instance outside of the local MySQL on EMR. How can I modify an existing cluster configuration to add hive-site.xml from S3?
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-dev-create-metastore-outside.html
I'm not sure what you mean by "add hive-site.xml from S3". If you're just looking to get the file off of S3 and into your conf directory, you can do that with the aws-cli while logged into your cluster,
aws s3 cp s3://path/to/hive-site.xml ~/conf
More detailed instructions on migrating an existing EMR cluster's Hive MetaStore to an external service like RDS can be found below
--
Setting up an existing EMR cluster to look at an outside MySQL database is very easy. First, you'll need to dump the MySQL database that's running on your Master node to keep your existing schema information. Assuming you have a large amount of ephemeral storage and your database socket is located at /var/lib/mysql/mysql.sock:
mysqldump -S /var/lib/mysql/mysql.sock hive > /media/ephemeral0/backup.sql
Then you'll need to import this into your outside MySQL instance. If this is in RDS, you'll first need to create the hive database and then import your data into it:
mysql -h rds_host -P 3306 -u rds_master_user -prds_password mysql -e "create database hive"
and,
mysql -h rds_host -P 3306 -u rds_master_user -prds_password hive < /media/ephemeral0/backup.sql
Next up, you'll need to create a user for hive to use. Log into your outside MySQL instance and execute the following statement (with a better username and password):
grant all privileges on hive.* to 'some_hive_user'@'%' identified by 'some_password'; flush privileges;
Lastly, create/make the same changes to hive-site.xml as outlined in the documentation you cited (filling in the proper host, user, and password information) and restart your MetaStore. To restart your MetaStore, kill the already running MetaStore process and start a new one.
ps aux | grep MetaStore
kill pid
hive --service metastore&
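For reference, a minimal sketch of the hive-site.xml properties involved, using rds_host, some_hive_user and some_password as the placeholders from the steps above (the driver class depends on which JDBC driver is installed on the cluster):
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://rds_host:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.mariadb.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>some_hive_user</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>some_password</value>
</property>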
If you are on EMR 3.x, you can just use the method in the link you provided (using a bootstrap action).
If you are on EMR 4.x+, that bootstrap action is not available. You could
either add the custom properties through the EMR --configurations option with a xxx.json file (a sketch of such a file is below). The benefit is that it is straightforward; the con is that all the config properties you add this way appear in the AWS web console, which is not ideal if they include things like metastore database credentials, since you are using an external metastore,
or add a Step after the cluster is up to overwrite your hive-site.xml from S3, then another Step to execute sudo reload hive-server2 to restart the Hive server and pick up the new config.
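A minimal sketch of such a configurations JSON for the first option (the hive-site classification is the standard one EMR uses for hive-site.xml overrides; host, user and password are placeholders):
[
  {
    "Classification": "hive-site",
    "Properties": {
      "javax.jdo.option.ConnectionURL": "jdbc:mysql://rds_host:3306/hive?createDatabaseIfNotExist=true",
      "javax.jdo.option.ConnectionUserName": "some_hive_user",
      "javax.jdo.option.ConnectionPassword": "some_password"
    }
  }
]
It is passed at cluster creation, e.g. aws emr create-cluster ... --configurations file://./hive-config.json.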

Reverse tunnel works manually, not for replication

My MASTER mysql server is on a local network, and I have a new slave which is remote (i.e. on the internet). As MASTER does not have an accessible IP, I gathered from the docs that I should establish a reverse tunnel. I execute this:
ssh -f -N -T -R 7777:localhost:3306 user@slave.slave.com
on the MASTER. The connection seems to work - I can go to the slave and connect
with mysql to the MASTER without problem. For some reason though, replication does
not start. MASTER is already replicating to two other slaves without problems - seems the configuration is correct there.
I initiated replication on the slave as:
CHANGE MASTER TO MASTER_HOST='127.0.0.1',
MASTER_PORT=7777,
MASTER_USER='my_repl',
MASTER_PASSWORD='xxxxx',
MASTER_LOG_FILE='mysql-bin.nnnnn',
MASTER_LOG_POS=mm;
SHOW SLAVE STATUS reports MySQL trying to connect to the master, but never succeeding:
error connecting to master 'my_repl@127.0.0.1:7777' - retry-time: 60 retries: 86400
Can anyone suggest how to diagnose this problem?
BTW: OS is Linux.
My apologies... I didn't realize I had to define a new user with 127.0.0.1 as the IP.
So, 'intranet' connections use
replication_user@machine_name
as the id, while the connection which comes through the reverse tunnel uses
replication_user@127.0.0.1
as the id. Both have to be declared to MySQL separately. The rest of the info in the original message is valid - maybe this helps someone...
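In other words, something like this on the MASTER (using the placeholders from the CHANGE MASTER statement above):
CREATE USER 'my_repl'@'127.0.0.1' IDENTIFIED BY 'xxxxx';
GRANT REPLICATION SLAVE ON *.* TO 'my_repl'@'127.0.0.1';
FLUSH PRIVILEGES;
Because the reverse tunnel terminates on the MASTER itself, the slave's connection arrives from 127.0.0.1 there, so the existing 'replication_user'@'machine_name' account never matches.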
Greetings,
John
PS: Forgot to mention - I debugged this remotely (both MASTER and SLAVE are remote to me) using tcpdump:
tcpdump -i lo 'tcp port 7777'
on the SLAVE side, and
tcpdump -i lo 'tcp port 3306'
on the MASTER (of course that would not be very useful when there is much traffic).

Is there a way to start up MySQL Instance before Apache Instance on Scalr?

I am using Scalr for scaling the website server.
On the Apache server, I have installed Sakai and created a boot-up script for the Linux machine.
The question is: how can I ensure that the MySQL instance is booted up and running before the Apache server is booted up? If the Apache server boots first, the connection for running Sakai fails, and that causes all sorts of problems.
How can I ensure the instances start in the order I need? I am still new to Scalr, so any help would be appreciated.
Thanks
If you wrote the Apache startup script yourself, you can include a check whether the database instance is already running.
You can include a simple wait-loop:
MYSQL_OK=1
# keep retrying the test query until mysqld accepts the connection
while [ "$MYSQL_OK" -ne 0 ]; do
  echo "SELECT version();" | mysql -utestuser -ptestpassword testdb > /dev/null 2>&1
  MYSQL_OK=$?
  sleep 5
done
Obviously you have to create the test user and the test database in MySQL:
CREATE DATABASE testdb;
GRANT USAGE,SELECT ON testdb.* TO 'testuser'@'localhost' IDENTIFIED BY 'testpassword';
FLUSH PRIVILEGES;
Simply put the while-loop somewhere in the start) part of your script. If your system is some kind of Redhat-system, you will notice that the start-script /etc/init.d/httpd has a line like this:
Required-Start: $local_fs $remote_fs $network $named
If you add $mysqld to that line, Apache will insist on a running mysqld before startup:
Required-Start: $local_fs $remote_fs $network $named $mysqld
However, the disadvantage is that the Apache startup will fail instead of waiting for a running mysqld.
Good luck,
Alex.