I have successfully forked and built the Context Broker source code on a CentOS 6.9 VM and now I am trying to run the functional tests as the official documentation suggests. First, I installed the accumulator-server.py script:
$ make install_scripts INSTALL_DIR=~
Verified that it is installed:
$ accumulator-server.py -u
Usage: accumulator-server.py --host <host> --port <port> --url <server url> --pretty-print -v -u
Parameters:
--host <host>: host to use database to use (default is '0.0.0.0')
--port <port>: port to use (default is 1028)
--url <server url>: server URL to use (default is /accumulate)
--pretty-print: pretty print mode
--https: start in https
--key: key file (only used if https is enabled)
--cert: cert file (only used if https is enabled)
-v: verbose mode
-u: print this usage message
And then ran the functional tests:
$ make functional_test INSTALL_DIR=~
But the test fails and exits with the message below:
024/927: 0000_ipv6_support/ipv4_ipv6_both.test ........................................................................ (FAIL 11 - SHELL-INIT exited with code 1) testHarness.sh/IPv6 IPv4 Both : (0000_ipv6_support/ipv4_ipv6_both.test)
make: *** [functional_test] Error 11
$
I checked the file ../0000_ipv6_support/ipv4_ipv6_both.shellInit.stdout for any hint of what might be going wrong, but the error log does not lead me anywhere:
{ "dropped" : "ftest", "ok" : 1 }
accumulator running as PID 6404
Unable to start listening application after waiting 30
Does anyone have any idea about what may be going wrong here?
I checked the script which prints the error line Unable to start listening application after waiting 30 and noticed that stderr for accumulator-server.py is logged into the /tmp folder.
The accumulator_9977_stderr file had this log: 0000_ipv6_support/ipv4_ipv6_both.shellInit: line 27: accumulator-server.py: command not found
Once I saw this log I understood the mistake I made. I was running the
functional tests with sudo and the secure_path was being used instead of my PATH variable.
So in the end, running the functional tests with the command below solved the issue for me.
$ sudo "PATH=$PATH" make functional_test INSTALL_DIR=~
This can also be solved by editing the /etc/sudoers file with:
$ sudo visudo
and modifying the secure_path value.
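For illustration, a modified secure_path entry might look like the line below (the trailing directory is just an example; use wherever accumulator-server.py actually ended up for your INSTALL_DIR):
Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/home/youruser/bin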
Please, do you know how to resolve this issue?
I have searched everywhere without finding a solution.
06:45 SELinux is preventing systemd from open access on the file /root/.pm2/pm2.pid. For complete SELinux messages run: sealert -l d84a5a0b-cfcf-4cb9-918a-c0952bf70600 setroubleshoot
06:45 pm2-root.service: Can't convert PID files /root/.pm2/pm2.pid O_PATH file descriptor to proper file descriptor: Permission denied systemd 2
06:45 Failed to start PM2 process manager.
I have executed this command : sealert -l d84a5a0b-cfcf-4cb9-918a-c0952bf70600 setroubleshoot
Raw audit messages:
type=AVC msg=audit(1591498085.184:7731): avc: denied { open } for pid=1 comm="systemd" path="/root/.pm2/pm2.pid" dev="dm-0" ino=51695937 scontext=system_u:system_r:init_t:s0 tcontext=system_u:object_r:admin_home_t:s0 tclass=file permissive=0
PM2 Version : 4.4.0
NODE version : 12.18.0
CentOS Version : 8
My systemd service:
[Unit]
Description=PM2 process manager
Documentation=https://pm2.keymetrics.io/
After=network.target
[Service]
Type=forking
User=root
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Environment=PATH=/sbin:/bin:/usr/sbin:/usr/bin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Environment=PM2_HOME=/root/.pm2
PIDFile=/root/.pm2/pm2.pid
Restart=on-failure
ExecStart=/usr/lib/node_modules/pm2/bin/pm2 resurrect
ExecReload=/usr/lib/node_modules/pm2/bin/pm2 reload all
ExecStop=/usr/lib/node_modules/pm2/bin/pm2 kill
[Install]
WantedBy=multi-user.target
Thank you
As said in the comments, I had the exact same issue.
To solve this, just run the following commands as root after trying to start the PM2 service (in your case, this start attempt would be systemctl start pm2-root)
ausearch -c 'systemd' --raw | audit2allow -M my-systemd
semodule -i my-systemd.pp
This looks pretty generic, but it works. These lines were suggested by SELinux itself. To get them, I had to run the command journalctl -xe after trying to start the service.
Two options:
Edit the systemd file that starts pm2 and specify an alternative location for the pm2 PIDFile. You'll have to make two changes: one to tell pm2 where to place the PID file, and one to tell systemd where to look for it. Replace the existing PIDFile line with the following two lines:
Environment=PM2_PID_FILE_PATH=/run/pm2.pid
PIDFile=/run/pm2.pid
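After editing the unit file, reload systemd and restart the service so the new PID file location takes effect (pm2-root being the unit name used above):
systemctl daemon-reload
systemctl restart pm2-root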
Create an SELinux rule that allows this particular behavior. You can do that exactly as Backslash36 suggests in their answer. If you want to create the policy file yourself rather than generating it with audit2allow, the following should work, although you then have to compile it into a usable .pp file yourself.
module pm2 1.0;
require {
type user_home_t;
type init_t;
class file read;
}
#============= init_t ==============
allow init_t user_home_t:file read;
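Assuming the policy above is saved as pm2.te, one way to compile and load it with the standard SELinux policy tools is:
checkmodule -M -m -o pm2.mod pm2.te
semodule_package -o pm2.pp -m pm2.mod
semodule -i pm2.pp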
Disclaimer:
On an old machine running Ubuntu 14.04 with Upstart as the init system, I enabled the HTTP API by defining DOCKER_OPTS in /etc/default/docker. It works.
$ docker version
Client:
Version: 1.11.2
(...)
Server:
Version: 1.11.2
(...)
Problem:
This solution does not work on a recent machine with Ubuntu 16.04 and systemd.
As stated at the top of the newly installed /etc/default/docker file:
# Docker Upstart and SysVinit configuration file
#
# THIS FILE DOES NOT APPLY TO SYSTEMD
#
# Please see the documentation for "systemd drop-ins":
# https://docs.docker.com/engine/articles/systemd/
#
(...)
As I checked this information on the Docker documentation page for systemd, I need to fill in a daemon.json file, but as stated in the reference, some properties are self-explanatory while others could be under-explained.
That being said, I'm looking for help to convert this:
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -G myuser --debug"
to a daemon.json object?
Notes
PS1: I'm aware that daemon.json has debug: true as a default.
PS2: Probably group: "myuser" will work like this, or with an array of strings.
PS3: My main concern is to use the socket and HTTP simultaneously.
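For reference, a rough daemon.json equivalent of that DOCKER_OPTS line might look like the sketch below (untested; as the answers explain, the hosts entry conflicts with the default -H fd:// flag on systemd unless the unit is adjusted as well):
{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"],
  "group": "myuser",
  "debug": true
}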
EDIT (8/08/2017)
After reading the accepted answer, check the #white_gecko answer for more input on the matter.
With a lot of fragmented documentation it was difficult to solve this.
My first solution was to create the daemon.json with
{
"hosts": [
"unix:///var/run/docker.sock",
"tcp://127.0.0.1:2376"
]
}
This did not work; I got the error docker[5586]: unable to configure the Docker daemon with file /etc/docker/daemon.json after trying to restart the daemon with service docker restart.
Note: There was more to the error that I failed to copy.
What this error means is that, at daemon startup, there is a conflict between a command-line flag and the configuration in daemon.json.
When I looked into it with service docker status, this was the parent process: ExecStart=/usr/bin/docker daemon -H fd://.
This was strange because it differs from the configuration in /etc/init.d/docker, which I thought held the service configuration.
The strange part was that the file in init.d does not contain any reference to the daemon argument, nor to -H fd://.
After some research and a lot of searching through the system directories, I found the right directory (with help from the discussion in Docker GitHub issue #22339).
Solution
Edited the ExecStart line in /lib/systemd/system/docker.service to this new value:
/usr/bin/docker daemon
And created the /etc/docker/daemon.json with
{
"hosts": [
"fd://",
"tcp://127.0.0.1:2376"
]
}
Finally restarted the service with service docker start and now I get the "green light" on service docker status.
Tested the new configurations with:
$ docker run hello-world
Hello from Docker!
(...)
And,
$ curl http://127.0.0.1:2376/v1.23/info
[JSON]
I hope that this will help someone with a similar problem as mine! :)
I had the same problem. In my eyes the easiest solution, which doesn't touch any existing files managed by the system update process, is to use a systemd drop-in:
Just create a file /etc/systemd/system/docker.service which overrides the specific parts of the service in /lib/systemd/system/docker.service.
In this case the content of /etc/systemd/system/docker.service would be:
[Service]
ExecStart=/usr/bin/dockerd --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H=tcp://127.0.0.1:2375 -H=fd://
(You could even create a directory docker.service.d which contains multiple files to override different parameters.)
After adding the file you just run:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
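To confirm that the override is actually being applied, you can inspect the merged unit definition (this assumes a reasonably recent systemd):
systemctl cat docker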
The solution described at https://docs.docker.com/engine/admin/#troubleshoot-conflicts-between-the-daemonjson-and-startup-scripts works for me:
One notable example of a configuration conflict that is difficult to
troubleshoot is when you want to specify a different daemon address
from the default. Docker listens on a socket by default. On Debian and
Ubuntu systems using systemd, this means that a -H flag is always
used when starting dockerd. If you specify a hosts entry in the
daemon.json, this causes a configuration conflict (as in the above
message) and Docker fails to start.
To work around this problem, create a new file
/etc/systemd/system/docker.service.d/docker.conf with the following
contents, to remove the -H argument that is used when starting the
daemon by default.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
Note that the line with ExecStart= is actually required, otherwise it'll fail with the error:
docker.service: Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. Refusing.
After creating the file you must run:
sudo systemctl daemon-reload
sudo systemctl restart docker
For me, on Ubuntu 18.04.1 LTS with Docker 18.06.0-ce, it worked to create
/etc/systemd/system/docker.service.d/remote-api.conf
with the following content:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
then run sudo systemctl daemon-reload and sudo systemctl restart docker
See the result by calling:
curl http://localhost:2376/info
You might need to configure a proxy if your Docker host is behind one.
To achieve this, put the following in the /etc/default/docker file:
http_proxy="http://85.22.53.71:8080/"
https_proxy="http://85.22.53.71:8080/"
HTTP_PROXY="http://85.22.53.71:8080/"
HTTPS_PROXY="http://85.22.53.71:8080/"
# below you can list some *.enterprise_domain.com as well
NO_PROXY="localhost,127.0.0.1,::1"
Or create
/etc/systemd/system/docker.service.d/remote-api.conf with the following content:
[Service]
Environment="HTTP_PROXY=http://<you_proxy_ip>:<port>"
Environment="HTTPS_PROXY=https://<you_proxy_ip>:<port>/"
Environment="NO_PROXY=localhost,127.0.0.1,::1"
I hope it helps someone...
I want to ping an external ip from all of my servers that run zabbix agent.
I searched and found some articles about Zabbix user parameters.
In /etc/zabbix/zabbix_agentd.conf.d/ I created a file named userparameter_ping.conf with following content:
UserParameter=checkip[*],ping -c4 8.8.8.8 && echo 0 || echo 1
I created an item named checkip on the Zabbix server, with a graph, but got no data. After some more digging I found zabbix_get and tested my user parameter, but I got the error ZBX_NOTSUPPORTED:
# zabbix_get -s 172.20.4.43 -p 10050 -k checkip
My Zabbix version:
Zabbix Agent (daemon) v2.4.5 (revision 53282) (21 April 2015)
Does anybody know what I can do to address this?
After some changes and talks with folks on the mailing list it finally worked. Here is how:
First I created a file in
/etc/zabbix/zabbix_agentd.conf.d/
and added this line:
UserParameter=checkip[*],ping -W1 -c2 $1 >/dev/null 2>&1 && echo 0 || echo 1
and ran this command:
./sbin/zabbix_agentd -t checkip["8.8.8.8"]
checkip[8.8.8.8] [t|0]
So everything is done, but the Timeout option is very important for us:
Add a timeout in /etc/zabbix/zabbix_agentd.conf:
Timeout=30
The Timeout default is 3 s, so if you run
time ping -W1 -c2 8.8.8.8
you may see that it takes more than 3 s, in which case you get the error:
ZBX_NOTSUPPORTED
It can be anything: for example a timeout (the default timeout is 3 seconds and ping -c4 requires at least 3 seconds), permissions or the path to ping, an agent that was not restarted, ...
Increase the debug level, restart the agent and check the Zabbix logs. You can also test zabbix_agentd directly:
zabbix_agentd -t checkip[]
[m|ZBX_NOTSUPPORTED] [Timeout while executing a shell script.] => Timeout problem. Edit zabbix_agentd.conf and increase Timeout settings. Default 3 seconds are not the best for your ping, which needs 3+ seconds.
If you need more than 30 s for the execution, you can use the nohup (command..) & combination to work around the timeout restriction.
That way, if you write the results to a file, on the next pass you can read the file and get the results back without any need to wait.
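A minimal sketch of that pattern (untested; the item keys and the status file path are made up for illustration):
UserParameter=checkip.launch[*],nohup sh -c 'ping -W1 -c2 $1 >/dev/null 2>&1 && echo 0 > /tmp/checkip.status || echo 1 > /tmp/checkip.status' >/dev/null 2>&1 & echo launched
UserParameter=checkip.result,cat /tmp/checkip.status 2>/dev/null || echo 1
The first item returns immediately and starts the ping in the background; the second item is polled later to read the stored result.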
For those who may be experiencing other issues with the same error message:
It is important to run zabbix_agentd with the -c parameter:
./sbin/zabbix_agentd -c zabbix_agentd.conf --test checkip["8.8.8.8"]
Otherwise zabbix might not pick up on the command and will thus yield ZBX_NOTSUPPORTED.
It also helps to isolate the command into a script file, as Zabbix will butcher in-line commands in UserParameter= much more than you'd expect.
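For example (hypothetical paths, adjust to your layout), the ping logic could be moved into a small script that the user parameter calls:
/etc/zabbix/scripts/checkip.sh:
#!/bin/sh
# Echo 0 if the target answers two pings (1 s timeout each), 1 otherwise.
ping -W1 -c2 "$1" >/dev/null 2>&1 && echo 0 || echo 1
userparameter_ping.conf:
UserParameter=checkip[*],/etc/zabbix/scripts/checkip.sh $1
Remember to make the script executable and restart the agent after the change.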
I defined two user parameters like this for sync checking between two Samba DCs.
/etc/zabbix/zabbix_agentd.d/userparameter_samba.conf:
UserParameter=syncma, sudo samba-tool drs replicate smb1 smb2 cn=schema,cn=configuration,dc=domain,dc=com
UserParameter=syncam, sudo samba-tool drs replicate smb2 smb1 cn=schema,cn=configuration,dc=domain,dc=com
and also provided sudoers access for the zabbix user to execute the command, in /etc/sudoers.d/zabbix:
Defaults:zabbix !syslog
Defaults:zabbix !requiretty
zabbix ALL=(ALL) NOPASSWD: /usr/bin/samba-tool
zabbix ALL=(ALL) NOPASSWD: /usr/bin/systemctl
And "EnableRemoteCommands" is enabled on my zabbix_aganetd.conf, sometimes when I run
zabbix_get -s CLIENT_IP -p10050 -k syncma or
zabbix_get -s CLIENT_IP -p10050 -k syncam
I get the error ZBX_NOTSUPPORTED: Timeout while executing a shell script.
but after executing /sbin/zabbix_agentd -t syncam on the client, the Zabbix server just responds normally:
Replicate from smb2 to smb1 was successful.
And when it has a problem I get the error below in my zabbix.log:
failed to kill [ sudo samba-tool drs replicate smb1 smb2 cn=schema,cn=configuration,dc=domain,dc=com]: [1] Operation not permitted
It seems like a permission error, but it just resolved itself after executing /sbin/zabbix_agentd -t syncam. I am not sure whether the error is gone permanently or will come back at the next Zabbix item check interval.
I am new to Jekyll blogging and am trying to view my blog locally at
http://localhost:4000
but it fails.
➜ my-awesome-site > jekyll serve
Notice: for 10x faster LSI support, please install http://rb-gsl.rubyforge.org/
Configuration file: /home/Git/my-awesome-site/_config.yml
Source: /home/Git/my-awesome-site
Destination: /home/Git/my-awesome-site/_site
Generating...
done.
Configuration file: /home/Git/my-awesome-site/_config.yml
jekyll 2.2.0 | Error: Address already in use - bind(2)
I tried
$ lsof -wni tcp:3000
$ lsof -wni tcp:4000
but both of them return nothing.
My Ruby version is:
➜ my-awesome-site > ruby --version
ruby 2.0.0p451 (2014-02-24 revision 45167) [universal.x86_64-darwin13]
What should I do next? I've re-installed jekyll already but the same problem remains.
See the comments in http://jekyllrb.com/docs/usage/; they should help you:
If you need to kill the server, you can kill -9 1234 where "1234" is
the PID.
If you cannot find the PID, then do, ps aux | grep jekyll
and kill the instance.
The steps here fixed it for me. I had to prepend sudo to the commands.
$> sudo lsof -wni tcp:4000
It will give you information about the process running on TCP port 4000, including its PID (process ID). Now use the command below to kill the process.
$> sudo kill -9 PID
Now you can execute the jekyll serve command to start your site.
See which process is using that port and kill it, then run again, or try running Jekyll on a different port.
If #Matifou's answer here doesn't work, do the following instead:
The fix for anyone: run jekyll serve on an unused port:
Two ways:
In your _config.yml file, specify a port other than 4000 like this, for example:
port: 4001
OR (my preferred choice), add --port 4001 to your jekyll serve command, like this, for example:
bundle exec jekyll serve --livereload --port 4001
From: https://jekyllrb.com/docs/configuration/options/#serve-command-options
See my answer here: Is it possible to serve multiple Jekyll sites locally?
My particular problem: NoMachine is interfering:
When I run:
bundle exec jekyll serve --livereload --drafts --unpublished
I get these errors:
jekyll 3.9.0 | Error: Address already in use - bind(2) for 127.0.0.1:4000
.
.
.
/usr/lib/ruby/2.7.0/socket.rb:201:in `bind': Address already in use - bind(2) for 127.0.0.1:4000 (Errno::EADDRINUSE)
ps aux | grep jekyll doesn't show any processes running except this grep command itself. So, that doesn't help.
sudo lsof -wni tcp:4000, however, shows a running nxd nx daemon process:
$ sudo lsof -wni tcp:4000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nxd 914803 nx 3u IPv4 7606783 0t0 TCP *:4000 (LISTEN)
nxd 914803 nx 4u IPv6 7599664 0t0 TCP *:4000 (LISTEN)
I discovered this is due to my NoMachine remote login server.
If you are running NoMachine, click on the NoMachine icon in the top-right of your task bar (this example is on Ubuntu 20.04).
Then click on "Show server status" --> Ports, and you'll see that NoMachine is running nx on port 4000, which is what is interfering.
So, use the fix above to serve jekyll on a different port, such as 4001 instead of 4000. I recommend leaving the NoMachine port settings as-is, on port 4000, because NoMachine says:
Automatic updates require that hosts with NoMachine client or server installed have access to the NoMachine update server on port 4000 and use the TCP protocol.
See also:
Is it possible to serve multiple Jekyll sites locally?
my answer
I am trying to set up mysql-proxy on Ubuntu on Amazon EC2.
I have done the following:
sudo apt-get install mysql-proxy --yes
vi /etc/default/mysql-proxy
I put the following content in /etc/default/mysql-proxy:
ENABLED="true"
OPTIONS="--proxy-lua-script=/usr/share/mysql-proxy/rw-splitting.lua
--proxy-address=127.0.0.1:3306
--proxy-backend-addresses=private_ip_of_another_ec2_db_server:3306,private_ip_of_another_ec2_db_server:3306"
I also tried with --proxy-address=private_ip_or_public_ip_of_proxy-server:3306 or 4040
and with --proxy-backend-addresses=public_ip_of_another_ec2_db_server:3306,public_ip_of_another_ec2_db_server:3306.
After that I tried to connect to the proxy server from another PC using mysql, like:
mysql -u some_user -pxxxxx -h proxy_server_ip
or
mysql -u some_user -pxxxxx -h proxy_server_ip -P 4040
But it's not working;
it shows this error:
ERROR 2003 (HY000): Can't connect to MySQL server on 'ip' (10061)
I want to mention that I can connect to the DB server remotely, since I allowed remote connections from any host.
I also tried /etc/init.d/mysql-proxy start and /etc/init.d/mysql-proxy restart, but with no result.
Just to inform you, /etc/init.d/mysql-proxy stop reports "failed".
Can anyone please help me set up and configure mysql-proxy on Ubuntu?
===
Edit
I found some help from another Stack Overflow question and, following a suggestion in the comments, performed the procedure below. It seems to be working now.
I installed mysql-client and mysql-server locally (on the proxy server).
Then I tried to run mysql-proxy using the following command:
mysql-proxy --proxy-backend-addresses=10.73.151.244:3306 --proxy-backend-addresses=10.73.198.7:3306 --proxy-address=:4040 --admin-username=root --admin-password=root --admin-lua-script=/usr/lib/mysql-proxy/lua/admin.lua
Then I tried to connect remotely to the proxy server, and it works.
But it seems I need to run this command under screen, because when I close the terminal the proxy stops working.
Can you please tell me whether I need to run this command under screen, or is there another way to keep it alive all the time?
There is no need to install the MySQL client or MySQL server on your mysql-proxy host.
mysql-proxy has "full daemon capabilities" compiled into it.
If you are running Ubuntu Server, you may wish to use an Upstart service script.
This script can be copied into /etc/init/mysql-proxy.conf:
# mysql-proxy.conf (Ubuntu 14.04.1) Upstart proxy configuration file for AWS RDS
# mysql-proxy - mysql-proxy job file
description "mysql-proxy upstart script"
author "shadowbq <shadowbq#gmail.com>"
# Stanzas
#
# Stanzas control when and how a process is started and stopped
# See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn
# When to start the service
start on runlevel [2345]
# When to stop the service
stop on runlevel [016]
# Automatically restart process if crashed
respawn
# Essentially lets upstart know the process will detach itself to the background
expect daemon
# Run before process
pre-start script
[ -d /var/run/mysql-proxy ] || mkdir -p /var/run/mysql-proxy
echo "starting mysql-proxy"
end script
# Start the process
exec /usr/bin/mysql-proxy --plugins=proxy --proxy-lua-script=/usr/share/mysql-proxy/rw-splitting.lua --log-level=debug --proxy-backend-addresses=private_ip_of_another_ec2_db_server:3306,private_ip_of_another_ec2_db_server:3306 --daemon --log-use-syslog --pid-file=/var/run/mysql-proxy/mysql-proxy.pid
In the above example I hard-coded the AWS RDS server into the script, instead of fiddling with the defaults and config file.
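With the job file in place, the proxy can then be managed through Upstart directly, for example:
sudo start mysql-proxy
sudo status mysql-proxy
sudo stop mysql-proxy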
Install Upgraded version 0.8.5
Note:
The apt repo does not have 0.8.5, so we need to download the tarball from the official MySQL site.
Prerequisite:
Create the file /etc/default/mysql-proxy with the following content:
ENABLED="true"
OPTIONS="--defaults-file=/etc/mysql/mysql-proxy.cnf"
Installation procedure:
Download mysql-proxy 0.8.x
Untar in /usr/local
Update PATH environment with /usr/local/mysql-proxy-0.8.5-linux-debian6.0-x86-64bit/bin
vim /etc/environment (to update environment path)
cd /usr/local/mysql-proxy-0.8.5-linux-debian6.0-x86-64bit/bin
Run command sudo ./mysql-proxy --defaults-file=/etc/mysql/mysql-proxy.cnf
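For example, the PATH line in /etc/environment could be extended like this (keep your existing entries and append the mysql-proxy bin directory, which depends on where the tarball was untarred):
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/mysql-proxy-0.8.5-linux-debian6.0-x86-64bit/bin"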
Sample mysql-proxy.cnf file:
[mysql-proxy]
log-level=debug
log-file=/var/log/mysql-proxy.log
pid-file = /var/run/mysql-proxy.pid
daemon = true
--no-proxy = false
admin-username=ADMIN
admin-password=ADMIN
proxy-backend-addresses=RDS-ENDPOINT:RDS-PORT
admin-lua-script=/usr/lib/mysql-proxy/lua/admin.lua
proxy-address=0.0.0.0:4040
admin-address=localhost:4041
Change the host IP and port to those of your RDS instance or MySQL server.
Connect to the MySQL server via the proxy with:
mysql -h{proxy-host-ip} -P 4040 -u{mysql_username} -p
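If the stock admin.lua example script is used (as configured above), the admin interface on port 4041 can be checked in a similar way, for example:
mysql -h 127.0.0.1 -P 4041 -uADMIN -pADMIN -e "SELECT * FROM backends"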