Orion Context Broker already running error - fiware

I have a problem when I start Orion Context Broker as a Docker container: it always exits, saying that Orion is already running. How can I fix this issue?
srdjan-orion-1 | time=2022-07-27T09:53:34.508Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[432]:pidFile | msg=PID-file '/tmp/contextBroker.pid' found. A broker seems to be running already
srdjan-orion-1 exited with code 1
When I run sudo ps aux | grep contextBroker, this is the result:
srdjan 31012 0.0 0.0 11568 652 pts/0 S+ 11:56 0:00 grep --color=auto contextBroker
Also, every time I run that command, the first number after my username is different, and when I try to kill the process this is the result:
sudo kill 31012
kill: (31012): No such process
and also this:
sudo kill 11568
kill: (11568): No such process
Thanks for the help!
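Note that the ps output above only matches the grep process itself (hence the changing PID), so no contextBroker is actually running on the host; the stale /tmp/contextBroker.pid file lives inside the container's filesystem. A minimal sketch for clearing it, assuming a Docker Compose service named orion (the service name is an assumption; adjust to your compose file):

# no output here means no broker process exists on the host
pgrep -f contextBroker

# recreate the container instead of restarting it, so its filesystem
# (including the stale /tmp/contextBroker.pid) is discarded
docker compose rm -sf orion
docker compose up -d orion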

Related

Issues with helm install Orion Context Broker

I'm trying to install FIWARE Orion on AKS using your Helm chart. I installed MongoDB using
helm repo add azure-marketplace https://marketplace.azurecr.io/helm/v1/repo
helm install my-release azure-marketplace/mongodb
Then I configured MongoDB in values.yaml as follows:
## database configuration
db:
  # -- configuration of the mongo-db hosts. If multiple hosts are inserted, it is assumed that mongo is running as a replica set
  hosts: [my-release-mongodb]
  # - my-release-mongodb
  # -- the db to use. If running in multiservice mode, it is used as a prefix.
  name: orion
  # -- Database authentication (not needed if MongoDB doesn't use --auth)
  auth:
    # -- user for connecting mongo
    user: root
    # -- password to be used on mongo
    password: mypasswd
    # -- the MongoDB authentication mechanism to use in the case user and password is set
    #mech: SCRAM-SHA-1
I then install with the command: helm install test orion
Since I see this error in the pod log, I suppose something is wrong:
kubectl logs test-orion-7dfcc9c7fb-8vbgw
time=2021-05-28T19:50:29.737Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=mongocContextCachePersist.cpp[59]:mongocContextCachePersist | msg=Database Error (persisting context: command insert requires authentication)
Can you help me with this please?
Kind regards,
Johan,
You should ensure that MongoDB is actually available at "my-release-mongodb:27017"; you can use "kubectl get services" for that. Besides that, ensure that "root:mypasswd" are really the credentials set up in MongoDB.
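A quick way to check both, sketched under the assumption that the chart exposes the database as a service named my-release-mongodb (adjust names and the mongo image tag to your setup):

# confirm the service name and port Orion will connect to
kubectl get services

# spin up a throwaway pod and ping the database with the configured credentials
kubectl run mongo-test --rm -it --restart=Never --image=mongo:4.4 -- \
  mongo "mongodb://root:mypasswd@my-release-mongodb:27017/admin" --eval "db.runCommand({ ping: 1 })"

If the ping fails with an authentication error, the chart most likely generated its own root password, typically stored in a Kubernetes secret alongside the release.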

openstack compute service list --service nova-compute empty

After installing nova-compute on the compute node, it failed to start, and this command run from the controller node returns an empty result:
openstack compute service list --service nova-compute
The nova-compute.log file contains these two messages:
2018-11-19 12:06:05.446 986 INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge
2018-11-19 12:30:13.784 1140 INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge
openstack compute service list returns three service components on the controller, all in a down state:
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 2  | nova-conductor   | Controller | internal | enabled | down  | 2018-11-17T17:32:48.000000 |
| 4  | nova-scheduler   | Controller | internal | enabled | down  | 2018-11-17T17:32:49.000000 |
| 5  | nova-consoleauth | Controller | internal | enabled | down  | None                       |
+----+------------------+------------+----------+---------+-------+----------------------------+
service nova-compute status reports Active.
How can I resolve these problems?
This is probably because you missed creating the database for nova_cell0:
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650
# su -s /bin/sh -c "nova-manage db sync" nova
# nova-manage cell_v2 list_cells
# su -s /bin/sh -c "nova-manage api_db sync" nova
Also make sure that in /etc/nova/nova.conf on the compute node you have added the following configuration:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
Then restart the compute services and try the command openstack compute service list again.
This solution also applies when openstack compute service list is empty or nova hypervisor list is empty.
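For the restart, a sketch (the service name is an assumption: it is nova-compute on Ubuntu/Debian and openstack-nova-compute on CentOS/RHEL):

# on the compute node
sudo systemctl restart nova-compute

# then verify from the controller
openstack compute service list --service nova-compute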

Why so many open file descriptors with MySQL 5.6.38 on CentOS?

I have two MySQL instances running with --open-files-limit=65536, but lsof shows ~193644 open file descriptors:
$ lsof -n | grep mysql | wc -l
196410
$ lsof -n | grep mysql | grep ".MYI" | wc -l
83240
$ lsof -n | grep mysql | grep ".MYD" | wc -l
74053
$ sysctl fs.file-max
fs.file-max = 790612
$ lsof -n | wc -l
224647
Why are there so many open file descriptors? What could be the root cause? How can I debug this further?
The problem is with the lsof version. I had lsof-4.87 on CentOS 7, which shows thread information and therefore counts each open file once per thread. I changed to lsof-4.82 and the number got reduced.
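To double-check without depending on the lsof version, you can count descriptors straight from /proc, which lists each fd once per process rather than per thread (a sketch; run as root so all entries are readable):

# count open fds for every mysqld instance
for pid in $(pgrep -x mysqld); do
    echo "$pid: $(ls /proc/$pid/fd | wc -l) open fds"
done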

User process shadows mysqld service, how do I find the bad command?

Kubuntu 17
When I log in and run "ps -ef | grep mysqld" I see two copies running: one as the system service daemon, and one as a user process.
ps -ef | grep mysqld
mysql     1953     1  0 09:02 ?      00:00:13 /usr/sbin/mysqld
richard   3233  3220  0 09:13 ?      00:00:35 /usr/sbin/mysqld --defaults-file=/home/richard/.local/share/akonadi/mysql.conf --datadir=/home/richard/.local/share/akonadi/db_data/ --socket=/tmp/akonadi-richard.EK3Z9U/mysql.socket
richard  13309  3138  0 18:05 pts/1  00:00:00 grep --color=auto mysqld
I don't see any script or command with "mysqld" in ~/.profile, ~/.autostart, ~/.bashrc, /etc/profile, or /etc/init.d (except for the mysql start script presumably used by the system and owned by root).
Where else should I look for the errant command?
Any great ideas on how to look for it effectively?
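One way to look: the second and third numeric columns in the ps output are PID and PPID, so the parent PID (3220) points at whatever launched the user-owned mysqld, and its command line already mentions akonadi, suggesting the KDE Akonadi PIM server rather than a startup script. A sketch for walking the process tree, with PIDs taken from the listing above:

# inspect the suspect mysqld and its parent
ps -o pid,ppid,cmd -p 3233
ps -o pid,ppid,cmd -p 3220

# or show the whole process tree for the user at once
pstree -ap richard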

Caffe: GPU CUDA error after training: Check failed: error == cudaSuccess (30 vs. 0) unknown error

Sometimes after training, or when I stop training manually by pressing Ctrl+C, I get this CUDA error:
Check failed: error == cudaSuccess (30 vs. 0) unknown error
This only started to happen recently, though. Has anyone experienced this before, or do you know how to fix it or what the problem is?
Complete log:
I1027 09:29:37.779079 11959 caffe.cpp:217] Using GPUs 0
I1027 09:29:37.780676 11959 caffe.cpp:222] GPU 0: �|���
F1027 09:29:37.780697 11959 common.cpp:151] Check failed: error == cudaSuccess (30 vs. 0) unknown error
*** Check failure stack trace: ***
    @ 0x7f6cc4f465cd  google::LogMessage::Fail()
    @ 0x7f6cc4f48433  google::LogMessage::SendToLog()
    @ 0x7f6cc4f4615b  google::LogMessage::Flush()
    @ 0x7f6cc4f48e1e  google::LogMessageFatal::~LogMessageFatal()
    @ 0x7f6cc5558032  caffe::Caffe::SetDevice()
    @ 0x40b3f8  train()
    @ 0x407590  main
    @ 0x7f6cc3eb7830  __libc_start_main
    @ 0x407db9  _start
    @ (nil)  (unknown)
Use the nvidia-smi command to see which programs are running on the GPU. If any unwanted instance of caffe is still running after you press Ctrl+C, you should kill it by process ID, like below:
+------------------------------------------------------+
| NVIDIA-SMI 352.63     Driver Version: 352.63         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980 Ti  Off  | 0000:01:00.0      On |                  N/A |
| 58%   83C    P2  188W / 260W  |  1164MiB /  6142MiB  |     96%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 980 Ti  Off  | 0000:02:00.0     Off |                  N/A |
| 53%   73C    P2  127W / 260W  |   585MiB /  6143MiB  |     35%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                                    Usage |
|=============================================================================|
|    0      1101    C   ...-xx/build/tools/caffe                      788MiB  |
|    0      1570    G   /usr/bin/X                                    235MiB  |
|    0      1594    C   /usr/bin/python                               102MiB  |
|    0      2387    G   compiz                                         10MiB  |
|    0      3984    G   /usr/local/MATLAB/R2016a/bin/glnxa64/MATLAB     2MiB  |
|    1     25056    C   /usr/bin/caffe                                563MiB  |
+-----------------------------------------------------------------------------+
You can kill it with this command: sudo kill -9 1101
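If you prefer a parse-friendly listing of the compute processes, nvidia-smi has a query mode (support on a driver as old as 352.63 is an assumption; check nvidia-smi --help-query-compute-apps):

# list GPU compute processes as pid, name
nvidia-smi --query-compute-apps=pid,process_name --format=csv,noheader

# then kill the stray caffe instance by its PID from that listing
sudo kill -9 1101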
Try running make all, then make test, then make runtest; it should work.
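For reference, from the Caffe source root that sequence looks like this (assuming the Makefile build rather than CMake):

make all -j"$(nproc)"    # rebuild the library and tools
make test -j"$(nproc)"   # build the unit tests
make runtest             # run the test suite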
After running make all, I noticed some errors regarding the libcudnn libraries; I had them duplicated in /usr/lib/x86_64-linux-gnu and /usr/local/cuda-8.0/lib64. After keeping only the ones in /usr/lib/x86_64-linux-gnu and restarting the laptop, everything worked.
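To check for the same duplication on your machine (paths from the answer above; the caffe binary path is an assumption about a default Makefile build):

# list every copy of libcudnn in the two usual locations
ls -l /usr/lib/x86_64-linux-gnu/libcudnn* /usr/local/cuda-8.0/lib64/libcudnn* 2>/dev/null

# see which copy the built binary actually resolves at runtime
ldd build/tools/caffe | grep cudnn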
CUDA runtime error (30) can appear if your program is unable to create or open the /dev/nvidia-uvm device file. This is usually fixed by installing the nvidia-modprobe package:
sudo apt-get install nvidia-modprobe
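You can also verify the device node directly (a sketch; nvidia-uvm is the kernel module backing that device):

# check whether the UVM device file exists
ls -l /dev/nvidia-uvm

# if it is missing, try loading the module by hand
sudo modprobe nvidia-uvm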
Try to reinstall/rebuild the NVIDIA driver for the current kernel:
sudo apt-get install --reinstall nvidia-375
sudo apt-get install nvidia-modprobe