How to add `--registry-mirror` when starting docker from "Docker quickstart terminal"? - configuration

From the docker distribution document: https://github.com/docker/distribution
It says that to configure Docker to use the mirror, we should:
Configuring the Docker daemon
You will need to pass the --registry-mirror option to your Docker daemon on startup:
docker --registry-mirror=https://<my-docker-mirror-host> daemon
I'm a newbie to Docker, and on my Mac I normally start Docker with the provided "Docker Quickstart Terminal" app, which actually invokes a start.sh shell script:
#!/bin/bash
VM=default
DOCKER_MACHINE=/usr/local/bin/docker-machine
VBOXMANAGE=/Applications/VirtualBox.app/Contents/MacOS/VBoxManage
BLUE='\033[0;34m'
GREEN='\033[0;32m'
NC='\033[0m'
unset DYLD_LIBRARY_PATH
unset LD_LIBRARY_PATH
clear
if [ ! -f $DOCKER_MACHINE ] || [ ! -f $VBOXMANAGE ]; then
echo "Either VirtualBox or Docker Machine are not installed. Please re-run the Toolbox Installer and try again."
exit 1
fi
$VBOXMANAGE showvminfo $VM &> /dev/null
VM_EXISTS_CODE=$?
if [ $VM_EXISTS_CODE -eq 1 ]; then
echo "Creating Machine $VM..."
$DOCKER_MACHINE rm -f $VM &> /dev/null
rm -rf ~/.docker/machine/machines/$VM
$DOCKER_MACHINE create -d virtualbox --virtualbox-memory 2048 --virtualbox-disk-size 204800 $VM
else
echo "Machine $VM already exists in VirtualBox."
fi
VM_STATUS=$($DOCKER_MACHINE status $VM)
if [ "$VM_STATUS" != "Running" ]; then
echo "Starting machine $VM..."
$DOCKER_MACHINE start $VM
yes | $DOCKER_MACHINE regenerate-certs $VM
fi
echo "Setting environment variables for machine $VM..."
clear
cat << EOF
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
EOF
echo -e "${BLUE}docker${NC} is configured to use the ${GREEN}$VM${NC} machine with IP ${GREEN}$($DOCKER_MACHINE ip $VM)${NC}"
echo "For help getting started, check out the docs at https://docs.docker.com"
echo
eval $($DOCKER_MACHINE env $VM --shell=bash)
USER_SHELL=$(dscl /Search -read /Users/$USER UserShell | awk '{print $2}' | head -n 1)
if [[ $USER_SHELL == *"/bash"* ]] || [[ $USER_SHELL == *"/zsh"* ]] || [[ $USER_SHELL == *"/sh"* ]]; then
$USER_SHELL --login
else
$USER_SHELL
fi
Is this the correct file in which to put my '--registry-mirror' config? What should I do?

If you do a docker-machine create --help:
docker-machine create --help
Usage: docker-machine create [OPTIONS] [arg...]
Create a machine.
Run 'docker-machine create --driver name' to include the create flags for that driver in the help text.
Options:
...
--engine-insecure-registry [--engine-insecure-registry option --engine-insecure-registry option]    Specify insecure registries to allow with the created engine
--engine-registry-mirror [--engine-registry-mirror option --engine-registry-mirror option]    Specify registry mirrors to use
So you can modify your script to add one more parameter:
--engine-registry-mirror=...
However, since your 'default' docker-machine probably already exists (check with docker-machine ls), you might need to remove it first with docker-machine rm default. Make sure you can easily recreate your images from your local Dockerfiles, and/or that you don't have a data container that would need to be saved first.
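For example, recreating the machine with the mirror baked in could look roughly like this (the mirror URL, memory and disk sizes are placeholders taken from the start.sh above):
# remove the existing machine first (images and data containers on it will be lost)
docker-machine rm default
# recreate it with a registry mirror configured on the engine
docker-machine create -d virtualbox \
  --virtualbox-memory 2048 --virtualbox-disk-size 204800 \
  --engine-registry-mirror=https://<my-docker-mirror-host> \
  default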

On Docker for Windows, open C:\Users\<YourName>\.docker\daemon.json and edit the "registry-mirrors" entry in that file:
{"registry-mirrors":["https://registry.docker-cn.com"],"insecure-registries":[], "debug":true, "experimental": true}

Related

Inject .sql files in order using docker-compose

I'm running a MySQL server Docker container using a docker-compose YAML file.
Here is what the file looks like:
version: '3.1'
services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    volumes:
      - ./mysql-dump/samples:/docker-entrypoint-initdb.d
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: db_example
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
In the db service, the volume is set to ./mysql-dump/samples:/docker-entrypoint-initdb.d; this takes the .sql files from ./mysql-dump/samples and injects them into the database.
In my case I have two files: file2.sql for the SQL schema of the database, and file1.sql for the data.
Since the files appear to be injected in order, I get a NO SUCH TABLE error, surely because the schema is injected last (its name being file2.sql).
Is there a way to reverse the order of the injection besides changing the names of the files?
If you go through the documentation of the mysql image on Docker Hub, it clearly mentions that the files are executed in alphabetical order.
When a container is started for the first time, a new database with
the specified name will be created and initialized with the provided
configuration variables. Furthermore, it will execute files with
extensions .sh, .sql and .sql.gz that are found in
/docker-entrypoint-initdb.d. Files will be executed in alphabetical
order. You can easily populate your mysql services by mounting a SQL
dump into that directory and provide custom images with contributed
data. SQL files will be imported by default to the database specified
by the MYSQL_DATABASE variable.
You need to rename the files, say to db.sql and table.sql, so that db.sql is imported first and table.sql second.
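If renaming is acceptable, numeric prefixes make the ordering explicit; for example (the new names are just an illustration):
mv mysql-dump/samples/file2.sql mysql-dump/samples/01-schema.sql
mv mysql-dump/samples/file1.sql mysql-dump/samples/02-data.sql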
Updated:
To reverse the order in which the MySQL dump files are processed, you have to modify the Dockerfile and the entrypoint script.
FROM mysql:8
#From mysql
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
EXPOSE 3306 33060
CMD ["mysqld"]
ENTRYPOINT:
#!/bin/bash
set -x
set -eo pipefail
shopt -s nullglob
# if command starts with an option, prepend mysqld
if [ "${1:0:1}" = '-' ]; then
set -- mysqld "$@"
fi
# skip setup if they want an option that stops mysqld
wantHelp=
for arg; do
case "$arg" in
-'?'|--help|--print-defaults|-V|--version)
wantHelp=1
break
;;
esac
done
# usage: file_env VAR [DEFAULT]
# ie: file_env 'XYZ_DB_PASSWORD' 'example'
# (will allow for "$XYZ_DB_PASSWORD_FILE" to fill in the value of
# "$XYZ_DB_PASSWORD" from a file, especially for Docker's secrets feature)
file_env() {
local var="$1"
local fileVar="${var}_FILE"
local def="${2:-}"
if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
exit 1
fi
local val="$def"
if [ "${!var:-}" ]; then
val="${!var}"
elif [ "${!fileVar:-}" ]; then
val="$(< "${!fileVar}")"
fi
export "$var"="$val"
unset "$fileVar"
}
# usage: process_init_file FILENAME MYSQLCOMMAND...
# ie: process_init_file foo.sh mysql -uroot
# (process a single initializer file, based on its extension. we define this
# function here, so that initializer scripts (*.sh) can use the same logic,
# potentially recursively, or override the logic used in subsequent calls)
process_init_file() {
local f="$1"; shift
local mysql=( "$@" )
case "$f" in
*.sh) echo "$0: running $f"; . "$f" ;;
*.sql) echo "$0: running $f"; "${mysql[@]}" < "$f"; echo ;;
*.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
*) echo "$0: ignoring $f" ;;
esac
echo
}
_check_config() {
toRun=( "$@" --verbose --help )
if ! errors="$("${toRun[@]}" 2>&1 >/dev/null)"; then
cat >&2 <<-EOM
ERROR: mysqld failed while attempting to check config
command was: "${toRun[*]}"
$errors
EOM
exit 1
fi
}
# Fetch value from server config
# We use mysqld --verbose --help instead of my_print_defaults because the
# latter only show values present in config files, and not server defaults
_get_config() {
local conf="$1"; shift
"$#" --verbose --help --log-bin-index="$(mktemp -u)" 2>/dev/null \
| awk '$1 == "'"$conf"'" && /^[^ \t]/ { sub(/^[^ \t]+[ \t]+/, ""); print; exit }'
# match "datadir /some/path with/spaces in/it here" but not "--xyz=abc\n datadir (xyz)"
}
# allow the container to be started with `--user`
if [ "$1" = 'mysqld' -a -z "$wantHelp" -a "$(id -u)" = '0' ]; then
_check_config "$#"
DATADIR="$(_get_config 'datadir' "$#")"
mkdir -p "$DATADIR"
chown -R mysql:mysql "$DATADIR"
exec gosu mysql "$BASH_SOURCE" "$#"
fi
if [ "$1" = 'mysqld' -a -z "$wantHelp" ]; then
# still need to check config, container may have started with --user
_check_config "$#"
# Get config
DATADIR="$(_get_config 'datadir' "$#")"
if [ ! -d "$DATADIR/mysql" ]; then
file_env 'MYSQL_ROOT_PASSWORD'
if [ -z "$MYSQL_ROOT_PASSWORD" -a -z "$MYSQL_ALLOW_EMPTY_PASSWORD" -a -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
echo >&2 'error: database is uninitialized and password option is not specified '
echo >&2 ' You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD'
exit 1
fi
mkdir -p "$DATADIR"
echo 'Initializing database'
"$#" --initialize-insecure
echo 'Database initialized'
if command -v mysql_ssl_rsa_setup > /dev/null && [ ! -e "$DATADIR/server-key.pem" ]; then
# https://github.com/mysql/mysql-server/blob/23032807537d8dd8ee4ec1c4d40f0633cd4e12f9/packaging/deb-in/extra/mysql-systemd-start#L81-L84
echo 'Initializing certificates'
mysql_ssl_rsa_setup --datadir="$DATADIR"
echo 'Certificates initialized'
fi
SOCKET="$(_get_config 'socket' "$#")"
"$#" --skip-networking --socket="${SOCKET}" &
pid="$!"
mysql=( mysql --protocol=socket -uroot -hlocalhost --socket="${SOCKET}" )
for i in {30..0}; do
if echo 'SELECT 1' | "${mysql[@]}" &> /dev/null; then
break
fi
echo 'MySQL init process in progress...'
sleep 1
done
if [ "$i" = 0 ]; then
echo >&2 'MySQL init process failed.'
exit 1
fi
if [ -z "$MYSQL_INITDB_SKIP_TZINFO" ]; then
# sed is for https://bugs.mysql.com/bug.php?id=20545
mysql_tzinfo_to_sql /usr/share/zoneinfo | sed 's/Local time zone must be set--see zic manual page/FCTY/' | "${mysql[@]}" mysql
fi
if [ ! -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
export MYSQL_ROOT_PASSWORD="$(pwgen -1 32)"
echo "GENERATED ROOT PASSWORD: $MYSQL_ROOT_PASSWORD"
fi
rootCreate=
# default root to listen for connections from anywhere
file_env 'MYSQL_ROOT_HOST' '%'
if [ ! -z "$MYSQL_ROOT_HOST" -a "$MYSQL_ROOT_HOST" != 'localhost' ]; then
# no, we don't care if read finds a terminating character in this heredoc
# https://unix.stackexchange.com/questions/265149/why-is-set-o-errexit-breaking-this-read-heredoc-expression/265151#265151
read -r -d '' rootCreate <<-EOSQL || true
CREATE USER 'root'@'${MYSQL_ROOT_HOST}' IDENTIFIED BY '${MYSQL_ROOT_PASSWORD}' ;
GRANT ALL ON *.* TO 'root'@'${MYSQL_ROOT_HOST}' WITH GRANT OPTION ;
EOSQL
fi
"${mysql[#]}" <<-EOSQL
-- What's done in this file shouldn't be replicated
-- or products like mysql-fabric won't work
SET @@SESSION.SQL_LOG_BIN=0;
ALTER USER 'root'@'localhost' IDENTIFIED BY '${MYSQL_ROOT_PASSWORD}' ;
GRANT ALL ON *.* TO 'root'@'localhost' WITH GRANT OPTION ;
${rootCreate}
DROP DATABASE IF EXISTS test ;
FLUSH PRIVILEGES ;
EOSQL
if [ ! -z "$MYSQL_ROOT_PASSWORD" ]; then
mysql+=( -p"${MYSQL_ROOT_PASSWORD}" )
fi
file_env 'MYSQL_DATABASE'
if [ "$MYSQL_DATABASE" ]; then
echo "CREATE DATABASE IF NOT EXISTS \`$MYSQL_DATABASE\` ;" | "${mysql[#]}"
mysql+=( "$MYSQL_DATABASE" )
fi
file_env 'MYSQL_USER'
file_env 'MYSQL_PASSWORD'
if [ "$MYSQL_USER" -a "$MYSQL_PASSWORD" ]; then
echo "CREATE USER '$MYSQL_USER'#'%' IDENTIFIED BY '$MYSQL_PASSWORD' ;" | "${mysql[#]}"
if [ "$MYSQL_DATABASE" ]; then
echo "GRANT ALL ON \`$MYSQL_DATABASE\`.* TO '$MYSQL_USER'#'%' ;" | "${mysql[#]}"
fi
echo 'FLUSH PRIVILEGES ;' | "${mysql[#]}"
fi
echo
ls -r /docker-entrypoint-initdb.d/ > /dev/null
for f in $(ls -r /docker-entrypoint-initdb.d/*); do
process_init_file "$f" "${mysql[@]}"
done
if [ ! -z "$MYSQL_ONETIME_PASSWORD" ]; then
"${mysql[#]}" <<-EOSQL
ALTER USER 'root'#'%' PASSWORD EXPIRE;
EOSQL
fi
if ! kill -s TERM "$pid" || ! wait "$pid"; then
echo >&2 'MySQL init process failed.'
exit 1
fi
echo
echo 'MySQL init process done. Ready for start up.'
echo
fi
fi
exec "$#"
If you run the container, you will see the files being processed in reverse order.
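To try this out, the customized image can be built from the Dockerfile and entrypoint above and run against the same init directory (a rough sketch; the image tag is made up):
# build the customized image from the directory containing the Dockerfile and docker-entrypoint.sh
docker build -t mysql-reverse-init .
# run it with the same init directory mounted; the .sql files should now be processed in reverse order
docker run --rm -e MYSQL_ROOT_PASSWORD=example -e MYSQL_DATABASE=db_example \
  -v "$PWD/mysql-dump/samples:/docker-entrypoint-initdb.d" mysql-reverse-init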

MySQL replication monitor - Seconds_Behind_Master

I'm using Nagios and the check_mysql_health plugin to monitor my MySQL databases. I need to keep an eye on the Seconds_Behind_Master values of my replicated databases, but I am unable to use SHOW SLAVE STATUS in a subquery to get at that value specifically. Does anyone know another way to get the Seconds_Behind_Master value of my slave databases as a single value? For the check_mysql_health plugin to work, I need to return just a single numeric value that can be monitored.
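For reference, the raw value can be pulled out on the shell with something like the following (host and credentials are placeholders), which is essentially what the script below does internally:
# print only the Seconds_Behind_Master value from SHOW SLAVE STATUS
mysql -h slavehost -u nagios -psecret -e 'SHOW SLAVE STATUS\G' \
  | awk '/Seconds_Behind_Master:/ {print $2}'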
#!/bin/bash
#########################################################################
# Script: check_mysql_slavestatus.sh #
# Author: Claudio Kuenzler www.claudiokuenzler.com #
# Purpose: Monitor MySQL Replication status with Nagios #
# Description: Connects to given MySQL hosts and checks for running #
# SLAVE state and delivers additional info #
# Original: This script is a modified version of #
# check mysql slave sql running written by dhirajt #
# Thanks to: Victor Balada Diaz for his ideas added on 20080930 #
# Soren Klintrup for stuff added on 20081015 #
# Marc Feret for Slave_IO_Running check 20111227 #
# Peter Lecki for his mods added on 20120803 #
# Serge Victor for his mods added on 20131223 #
# Omri Bahumi for his fix added on 20131230 #
# History: #
# 2008041700 Original Script modified #
# 2008041701 Added additional info if status OK #
# 2008041702 Added usage of script with params -H -u -p #
# 2008041703 Added bindir variable for multiple platforms #
# 2008041704 Added help because mankind needs help #
# 2008093000 Using /bin/sh instead of /bin/bash #
# 2008093001 Added port for MySQL server #
# 2008093002 Added mysqldir if mysql binary is elsewhere #
# 2008101501 Changed bindir/mysqldir to use PATH #
# 2008101501 Use $() instead of `` to avoid forks #
# 2008101501 Use ${} for variables to prevent problems #
# 2008101501 Check if required commands exist #
# 2008101501 Check if mysql connection works #
# 2008101501 Exit with unknown status at script end #
# 2008101501 Also display help if no option is given #
# 2008101501 Add warning/critical check to delay #
# 2011062200 Add perfdata #
# 2011122700 Checking Slave_IO_Running #
# 2012080300 Changed to use only one mysql query #
# 2012080301 Added warn and crit delay as optional args #
# 2012080302 Added standard -h option for syntax help #
# 2012080303 Added check for mandatory options passed in #
# 2012080304 Added error output from mysql #
# 2012080305 Changed from 'cut' to 'awk' (eliminate ws) #
# 2012111600 Do not show password in error output #
# 2013042800 Changed PATH to use existing PATH, too #
# 2013050800 Bugfix in PATH export #
# 2013092700 Bugfix in PATH export #
# 2013092701 Bugfix in getopts #
# 2013101600 Rewrite of threshold logic and handling #
# 2013101601 Optical clean up #
# 2013101602 Rewrite help output #
# 2013101700 Handle Slave IO in 'Connecting' state #
# 2013101701 Minor changes in output, handling UNKNOWN situations now #
# 2013101702 Exit CRITICAL when Slave IO in Connecting state #
# 2013123000 Slave_SQL_Running also matched Slave_SQL_Running_State #
#########################################################################
# Usage: ./check_mysql_slavestatus.sh -H dbhost -P port -u dbuser -p dbpass -s connection -w integer -c integer
#########################################################################
help="\ncheck_mysql_slavestatus.sh (c) 2008-2014 GNU GPLv2 licence
Usage: check_mysql_slavestatus.sh -H host -P port -u username -p password [-s connection] [-w integer] [-c integer]\n
Options:\n-H Hostname or IP of slave server\n-P Port of slave server\n-u Username of DB-user\n-p Password of DB-user\n-s Connection name (optional, with multi-source replication)\n-w Delay in seconds for Warning status (optional)\n-c Delay in seconds for Critical status (optional)\n
Attention: The DB-user you type in must have CLIENT REPLICATION rights on the DB-server. Example:\n\tGRANT REPLICATION CLIENT on *.* TO 'nagios'@'%' IDENTIFIED BY 'secret';"
STATE_OK=0 # define the exit code if status is OK
STATE_WARNING=1 # define the exit code if status is Warning (not really used)
STATE_CRITICAL=2 # define the exit code if status is Critical
STATE_UNKNOWN=3 # define the exit code if status is Unknown
export PATH=$PATH:/usr/local/bin:/usr/bin:/bin # Set path
crit="No" # what is the answer of MySQL Slave_SQL_Running for a Critical status?
ok="Yes" # what is the answer of MySQL Slave_SQL_Running for an OK status?
for cmd in mysql awk grep [
do
if ! `which ${cmd} &>/dev/null`
then
echo "UNKNOWN: This script requires the command '${cmd}' but it does not exist; please check if command exists and PATH is correct"
exit ${STATE_UNKNOWN}
fi
done
# Check for people who need help - aren't we all nice ;-)
#########################################################################
if [ "${1}" = "--help" -o "${#}" = "0" ];
then
echo -e "${help}";
exit 1;
fi
# Important given variables for the DB-Connect
#########################################################################
while getopts "H:P:u:p:s:w:c:h" Input;
do
case ${Input} in
H) host=${OPTARG};;
P) port=${OPTARG};;
u) user=${OPTARG};;
p) password=${OPTARG};;
s) connection=\"${OPTARG}\";;
w) warn_delay=${OPTARG};;
c) crit_delay=${OPTARG};;
h) echo -e "${help}"; exit 1;;
\?) echo "Wrong option given. Please use options -H for host, -P for port, -u for user and -p for password"
exit 1
;;
esac
done
# Connect to the DB server and check for informations
#########################################################################
# Check whether all required arguments were passed in
if [ -z "${host}" -o -z "${port}" -o -z "${user}" -o -z "${password}" ];then
echo -e "${help}"
exit ${STATE_UNKNOWN}
fi
# Connect to the DB server and store output in vars
ConnectionResult=`mysql -h ${host} -P ${port} -u ${user} --password=${password} -e "show slave ${connection} status\G" 2>&1`
if [ -z "`echo "${ConnectionResult}" |grep Slave_IO_State`" ]; then
echo -e "CRITICAL: Unable to connect to server ${host}:${port} with username '${user}' and given password"
exit ${STATE_CRITICAL}
fi
check=`echo "${ConnectionResult}" |grep Slave_SQL_Running: | awk '{print $2}'`
checkio=`echo "${ConnectionResult}" |grep Slave_IO_Running: | awk '{print $2}'`
masterinfo=`echo "${ConnectionResult}" |grep Master_Host: | awk '{print $2}'`
delayinfo=`echo "${ConnectionResult}" |grep Seconds_Behind_Master: | awk '{print $2}'`
# Output of different exit states
#########################################################################
if [ ${check} = "NULL" ]; then
echo "CRITICAL: Slave_SQL_Running is answering NULL"; exit ${STATE_CRITICAL};
fi
if [ ${check} = ${crit} ]; then
echo "CRITICAL: ${host}:${port} Slave_SQL_Running: ${check}"; exit ${STATE_CRITICAL};
fi
if [ ${checkio} = ${crit} ]; then
# Checking local node replication role
# LOCAL_NODE=`hostname`
ROLE=`mysql -h $host -u slave_user -p'ZAQ!2wsx' --execute="SHOW master STATUS\G;" | grep Binlog_Do_DB | cut -d ' ' -f 6`
if [[ -n "$ROLE" ]];
then
echo "OK: This node is Master"; exit ${STATE_OK};
else
echo "CRITICAL: ${host} Slave_IO_Running: ${checkio}"; exit ${STATE_CRITICAL};
fi
fi
if [ ${checkio} = "Connecting" ]; then
echo "CRITICAL: ${host} Slave_IO_Running: ${checkio}"; exit ${STATE_CRITICAL};
fi
if [ ${check} = ${ok} ] && [ ${checkio} = ${ok} ]; then
# Delay thresholds are set
if [[ -n ${warn_delay} ]] && [[ -n ${crit_delay} ]]; then
if ! [[ ${warn_delay} -gt 0 ]]; then echo "Warning threshold must be a valid integer greater than 0"; exit $STATE_UNKNOWN; fi
if ! [[ ${crit_delay} -gt 0 ]]; then echo "Critical threshold must be a valid integer greater than 0"; exit $STATE_UNKNOWN; fi
if [[ -z ${warn_delay} ]] || [[ -z ${crit_delay} ]]; then echo "Both warning and critical thresholds must be set"; exit $STATE_UNKNOWN; fi
if [[ ${warn_delay} -gt ${crit_delay} ]]; then echo "Warning threshold cannot be greater than critical"; exit $STATE_UNKNOWN; fi
if [[ ${delayinfo} -ge ${crit_delay} ]]
then echo "CRITICAL: Slave is ${delayinfo} seconds behind Master | delay=${delayinfo}s"; exit ${STATE_CRITICAL}
elif [[ ${delayinfo} -ge ${warn_delay} ]]
then echo "WARNING: Slave is ${delayinfo} seconds behind Master | delay=${delayinfo}s"; exit ${STATE_WARNING}
else echo "OK: Slave SQL running: ${check} Slave IO running: ${checkio} / master: ${masterinfo} / slave is ${delayinfo} seconds behind master | delay=${delayinfo}s"; exit ${STATE_OK};
fi
else
# Without delay thresholds
echo "OK: Slave SQL running: ${check} Slave IO running: ${checkio} / master: ${masterinfo} / slave is ${delayinfo} seconds behind master | delay=${delayinfo}s"
exit ${STATE_OK};
fi
fi
echo "UNKNOWN: should never reach this part (Slave_SQL_Running is ${check}, Slave_IO_Running is ${checkio})"
exit ${STATE_UNKNOWN}
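A concrete invocation would look like this (host, credentials and thresholds are placeholders), with output in the format produced by the script above:
./check_mysql_slavestatus.sh -H db-slave01 -P 3306 -u nagios -p secret -w 60 -c 300
# OK: Slave SQL running: Yes Slave IO running: Yes / master: db-master01 / slave is 0 seconds behind master | delay=0s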

Issues installing IDAS on CentOS 7 VM through provided RPMs

I've been trying to install the IDAS GE in a CentOS 7 VM on my machine through the UL2.0 RPMs (download link) available on its catalogue page.
I followed the instructions on GitHub, but I get stuck starting the IoTAgent as per section 3 of the Deployment section of the instructions. When I execute init_iotagent.sh, where I inserted the local IP of the VM, I get the error:
log4cplus:ERROR No appenders could be found for logger (main).
log4cplus:ERROR Please initialize the log4cplus system properly.
HTTPFilter DESTRUCTOR 0
HTTPFilter DESTRUCTOR 0
Also, in the instructions for Starting IoTAgent as a Service, it's said that:
After installing iot-agent-base RPM an init.d script can be found in
this folder /usr/local/iot/init.d .
But this file is not there, leading me to believe that the IoTAgent wasn't installed properly from the RPMs provided.
Also, I can't find any log files regarding the IoTAgent; only MongoDB has its log file, at /usr/local/iot/mongodb-linux-x86_64-2.6.9/log/mongoc.log.
If anyone could help, it would be appreciated. Also, if more info is needed, please let me know.
Thank you
I recommend that you get the GitHub repository, build the RPMs from source, and then install them on your CentOS, as explained in the documentation:
NOTE: I changed the BUILD_TYPE to Release, so I created the Release dir.
GIT_VERSION and GIT_COMMIT are not the latest ones.
git clone https://github.com/telefonicaid/fiware-IoTAgent-Cplusplus.git
cd fiware....
mkdir -p build/Release
cd build/Release
cmake -DGIT_VERSION=20527 -DGIT_COMMIT=217023407f25ed258043cfc00a46b6c05fb0b52c -DMQTT=ON -DCMAKE_BUILD_TYPE=Release ../../
make install
make package
The packages will be in pack/Linux/RPM/
rpm -i iot-agent-base-xxxxxxx (xxxxxxx will be the numbers of the build)
rpm -i iot-agent-ul-xxxxxx (xxxxxxx will be the numbers of the build)
Once installed with RPMs the init.d file is in: /usr/local/iot/init.d/iotagent
This is the content of the file:
#!/bin/bash
# Copyright 2015 Telefonica Investigación y Desarrollo, S.A.U
#
# This file is part of fiware-IoTagent-Cplusplus (FI-WARE project).
#
# iotagent Start/Stop iotagent
#
# chkconfig: 2345 99 60
# description: iotagent
. /etc/rc.d/init.d/functions
PARAM=$1
INSTANCE=$2
USERNAME=iotagent
EXECUTABLE=/usr/local/iot/bin/iotagent
CONFIG_PATH=/usr/local/iot/config
iotagent_start()
{
local result=0
local instance=${1}
if [[ ! -x ${EXECUTABLE} ]]; then
printf "%s\n" "Fail - missing ${EXECUTABLE} executable"
exit 1
fi
if [[ -z ${instance} ]]; then
list_instances="${CONFIG_PATH}/iotagent_*.conf"
else
list_instances="${CONFIG_PATH}/iotagent_${instance}.conf"
fi
for instance_config in ${list_instances}
do
local NAME
NAME=${instance_config%.conf}
NAME=${NAME#*iotagent_}
source ${instance_config}
local IOTAGENT_PID_FILE="/var/run/iot/iotagent_${NAME}.pid"
printf "Starting iotagent ${NAME}..."
status -p ${IOTAGENT_PID_FILE} ${EXECUTABLE} &> /dev/null
if [[ ${?} -eq 0 ]]; then
printf "%s\n" " Already running, skipping $(success)"
continue
fi
# Load the environment
set -a
source ${instance_config}
# Mandatory parameters
IOTAGENT_OPTS=" ${IS_MANAGER} \
-n ${IOTAGENT_SERVER_NAME} \
-v ${IOTAGENT_LOG_LEVEL} \
-i ${IOTAGENT_SERVER_ADDRESS} \
-p ${IOTAGENT_SERVER_PORT} \
-d ${IOTAGENT_LIBRARY_DIR} \
-c ${IOTAGENT_CONFIG_FILE}"
su ${USERNAME} -c "LD_LIBRARY_PATH=\"${IOTAGENT_LIBRARY_DIR}\" \
${EXECUTABLE} ${IOTAGENT_OPTS} & echo \$! > ${IOTAGENT_PID_FILE}" &> /dev/null
sleep 2 # wait some time to leave iotagent start
local PID=$(cat ${IOTAGENT_PID_FILE})
local var_pid=$(ps -ef | grep ${PID} | grep -v grep)
if [[ -z "${var_pid}" ]]; then
printf "%s" "pidfile not found"
printf "%s\n" "$(failure)"
exit 1
else
printf "%s\n" "$(success)"
fi
done
return ${result}
}
iotagent_stop()
{
local result=0
local iotagent_instance=${1}
if [[ -z ${iotagent_instance} ]]; then
list_run_instances="/var/run/iot/iotagent_*.pid"
else
list_run_instances="/var/run/iot/iotagent_${iotagent_instance}.pid"
fi
if [[ $(ls -l ${list_run_instances} 2> /dev/null | wc -l) -eq 0 ]]; then
printf "%s\n" "There aren't any instance of IoTAgent ${iotagent_instance} running $(success)"
return 0
fi
for run_instance in ${list_run_instances}
do
local NAME
NAME=${run_instance%.pid}
NAME=${NAME#*iotagent_}
printf "%s" "Stopping IoTAgent ${NAME}..."
local RUN_PID=$(cat ${run_instance})
kill ${RUN_PID} &> /dev/null
local KILLED_PID=$(ps -ef | grep ${RUN_PID} | grep -v grep | awk '{print $2}')
if [[ -z ${KILLED_PID} ]]; then
printf "%s\n" "$(success)"
else
printf "%s\n" "$(failure)"
result=$((${result}+1))
fi
rm -f ${run_instance} &> /dev/null
done
return ${result}
}
iotagent_status()
{
local result=0
local iotagent_instance=${1}
if [[ -z ${iotagent_instance} ]]; then
list_run_instances="/var/run/iot/iotagent_*.pid"
else
list_run_instances="/var/run/iot/iotagent_${iotagent_instance}.pid"
fi
if [[ $(ls -l ${list_run_instances} 2> /dev/null | wc -l) -eq 0 ]]; then
printf "%s\n" "There aren't any instance of IoTAgent ${iotagent_instance} running."
return 1
fi
for run_instance in ${list_run_instances}
do
local NAME
NAME=${run_instance%.pid}
NAME=${NAME#*iotagent_}
printf "%s\n" "IoTAgent ${NAME} status..."
status -p ${run_instance} ${NODE_EXEC}
result=$((${result}+${?}))
done
return ${result}
}
case ${PARAM} in
'start')
iotagent_start ${INSTANCE}
;;
'stop')
iotagent_stop ${INSTANCE}
;;
'restart')
iotagent_stop ${INSTANCE}
iotagent_start ${INSTANCE}
;;
'status')
iotagent_status ${INSTANCE}
;;
esac
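Given the PARAM and INSTANCE arguments the script reads, it can be driven like this once installed (a sketch; the instance name "mqtt" is made up and must match an iotagent_<name>.conf file under /usr/local/iot/config):
sudo /usr/local/iot/init.d/iotagent start        # start every iotagent_*.conf instance
sudo /usr/local/iot/init.d/iotagent status       # check the PID files under /var/run/iot/
sudo /usr/local/iot/init.d/iotagent stop mqtt    # stop only the "mqtt" instance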
And the log files are in /tmp/:
IoTAgent-IoTPlatform.log
IoTAgent.log
IoTAgent-Manager.log
Hope this helps you.

pacemaker can't start my zabbix service when I stop zabbix service

I want to use corosync+pacemaker+zabbix to achieve high availability. Below is my config:
crm(live)configure# show
node zabbix1 \
attributes standby="off" timeout="60"
node zabbix2 \
attributes standby="off"
primitive httpd lsb:httpd \
op monitor interval="10s"
primitive vip ocf:heartbeat:IPaddr \
params ip="192.168.56.110" nic="eth0" cidr_netmask="24" \
op monitor interval="10s"
primitive zabbix-ha lsb:zabbix_server \
op monitor interval="30s" timeout="20s" \
op start interval="0s" timeout="40s" \
op stop interval="0s" timeout="60s"
group webservice vip httpd zabbix-ha
property $id="cib-bootstrap-options" \
dc-version="1.1.8-7.el6-394e906" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
last-lrm-refresh="1377489711" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
and my crm_mon status is:
Last updated: Mon Aug 26 18:52:48 2013
Last change: Mon Aug 26 18:52:33 2013 via cibadmin on zabbix1
Stack: classic openais (with plugin)
Current DC: zabbix1 - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
3 Resources configured.
Node zabbix1: online
httpd (lsb:httpd): Started
vip (ocf::heartbeat:IPaddr): Started
zabbix-ha (lsb:zabbix_server): Started
Node zabbix2: online
Now I stop the zabbix-ha service on zabbix1 and wait for 300s, but Pacemaker can't start my zabbix-ha service:
[root@zabbix1 tmp]# ps -ef|grep zabbix
root 13287 31252 0 18:59 pts/2 00:00:00 grep zabbix
My zabbix-ha script is below; I can use crm resource stop/start zabbix-ha to stop/start zabbix-ha.
I'm not using the default Zabbix script (zabbix-2.0.6/misc/init.d/fedora/core/zabbix_serve); I created an LSB script myself. Below is my script for zabbix_server (I put it in /etc/init.d):
#!/bin/bash
#
# zabbix: Control the zabbix Daemon
#
# author: Denglei
#
# blog: http://dl528888.blog.51cto.com/
# description: This is a init.d script for zabbix. Tested on CentOS6. \
# Change DAEMON and PIDFILE if neccessary.
#
#Location of zabbix binary. Change path as neccessary
DAEMON=/usr/local/zabbix/sbin/zabbix_server
NAME=`basename $DAEMON`
#Pid file of zabbix, should be matched with pid directive in nginx config file.
PIDFILE=/tmp/$NAME.pid
#this file location
SCRIPTNAME=/etc/init.d/$NAME
#only run if binary can be found
test -x $DAEMON || exit 0
RETVAL=0
start() {
echo $"Starting $NAME"
$DAEMON
RETVAL=0
}
stop() {
echo $"Graceful stopping $NAME"
[ -s "$PIDFILE" ] && kill -QUIT `cat $PIDFILE`
RETVAL=0
}
forcestop() {
echo $"Quick stopping $NAME"
[ -s "$PIDFILE" ] && kill -TERM `cat $PIDFILE`
RETVAL=$?
}
reload() {
echo $"Graceful reloading $NAME configuration"
[ -s "$PIDFILE" ] && kill -HUP `cat $PIDFILE`
RETVAL=$?
}
status() {
if [ -s $PIDFILE ]; then
echo $"$NAME is running."
RETVAL=0
else
echo $"$NAME stopped."
RETVAL=3
fi
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
force-stop)
forcestop
;;
restart)
stop
start
;;
reload)
reload
;;
status)
status
;;
*)
echo $"Usage: $0 {start|stop|force-stop|restart|reload|status}"
exit 1
esac
exit $RETVAL
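Since Pacemaker's lsb: resource class relies on the script returning the standard LSB exit codes, a quick manual check on the node is one way to verify the script behaves as Pacemaker expects (a sketch; run as root):
/etc/init.d/zabbix_server start;  echo $?   # expect 0
/etc/init.d/zabbix_server status; echo $?   # expect 0 while running
/etc/init.d/zabbix_server stop;   echo $?   # expect 0
/etc/init.d/zabbix_server status; echo $?   # expect 3 when stopped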

Better script to restart mysql on Ubuntu 8.04

When I run sudo /etc/init.d/mysql restart on Ubuntu 8.04.2, sometimes a mysqld_safe process remains, eating 99% of CPU and making the machine practically unusable.
Is there a better way to restart mysql? I thought about writing a script:
sudo /etc/init.d/mysql stop
sleep 10
sudo killall mysqld_safe
sudo /etc/init.d/mysql start
But this would be an evil workaround. (And the script is just a quick shot.)
I googled and found that mysqld_safe is a wrapper script which starts mysqld and makes sure it gets restarted if it should die. So there should be a better way to restart the thing.
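Because mysqld_safe only exits once mysqld has shut down cleanly, a slightly less blunt version of the workaround would wait for the daemon to actually go away instead of sleeping a fixed 10 seconds (only a sketch; the pidfile path is an assumed default and may differ):
#!/bin/bash
sudo /etc/init.d/mysql stop
PIDFILE=/var/run/mysqld/mysqld.pid          # assumed default location
for i in $(seq 1 30); do                    # wait up to 30 s for mysqld to exit
    [ -f "$PIDFILE" ] || break
    sleep 1
done
# only if the wrapper is still hanging around, kill it before restarting
pgrep -f mysqld_safe >/dev/null && sudo killall mysqld_safe
sudo /etc/init.d/mysql start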
I googled and found that this is a common problem in this Ubuntu version. Is Debian / Ubuntu doing it wrong at this point? The /etc/init.d script looks quite sophisticated, and it deals with mysqld_safe as well, but my skills are not good enough to understand it fully. Still, this would be the best place to improve. This is a paste of the version on my machine (which is untouched):
#!/bin/bash
#
### BEGIN INIT INFO
# Provides: mysql
# Required-Start: $remote_fs $syslog mysql-ndb
# Required-Stop: $remote_fs $syslog mysql-ndb
# Should-Start: $network $named $time
# Should-Stop: $network $named $time
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start and stop the mysql database server daemon
# Description: Controls the main MySQL database server daemon "mysqld"
# and its wrapper script "mysqld_safe".
### END INIT INFO
#
set -e
set -u
${DEBIAN_SCRIPT_DEBUG:+ set -v -x}
test -x /usr/sbin/mysqld || exit 0
. /lib/lsb/init-functions
SELF=$(cd $(dirname $0); pwd -P)/$(basename $0)
CONF=/etc/mysql/my.cnf
MYADMIN="/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf"
# priority can be overriden and "-s" adds output to stderr
ERR_LOGGER="logger -p daemon.err -t /etc/init.d/mysql -i"
# Safeguard (relative paths, core dumps..)
cd /
umask 077
# mysqladmin likes to read /root/.my.cnf. This is usually not what I want
# as many admins e.g. only store a password without a username there and
# so break my scripts.
export HOME=/etc/mysql/
## Fetch a particular option from mysql's invocation.
#
# Usage: void mysqld_get_param option
mysqld_get_param() {
/usr/sbin/mysqld --print-defaults \
| tr " " "\n" \
| grep -- "--$1" \
| tail -n 1 \
| cut -d= -f2
}
## Do some sanity checks before even trying to start mysqld.
sanity_checks() {
# check for config file
if [ ! -r /etc/mysql/my.cnf ]; then
log_warning_msg "$0: WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz"
echo "WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz" | $ERR_LOGGER
fi
# check for diskspace shortage
datadir=`mysqld_get_param datadir`
if LC_ALL=C BLOCKSIZE= df --portability $datadir/. | tail -n 1 | awk '{ exit ($4>4096) }'; then
log_failure_msg "$0: ERROR: The partition with $datadir is too full!"
echo "ERROR: The partition with $datadir is too full!" | $ERR_LOGGER
exit 1
fi
}
## Checks if there is a server running and if so if it is accessible.
#
# check_alive insists on a pingable server
# check_dead also fails if there is a lost mysqld in the process list
#
# Usage: boolean mysqld_status [check_alive|check_dead] [warn|nowarn]
mysqld_status () {
ping_output=`$MYADMIN ping 2>&1`; ping_alive=$(( ! $? ))
ps_alive=0
pidfile=`mysqld_get_param pid-file`
if [ -f "$pidfile" ] && ps `cat $pidfile` >/dev/null 2>&1; then ps_alive=1; fi
if [ "$1" = "check_alive" -a $ping_alive = 1 ] ||
[ "$1" = "check_dead" -a $ping_alive = 0 -a $ps_alive = 0 ]; then
return 0 # EXIT_SUCCESS
else
if [ "$2" = "warn" ]; then
echo -e "$ps_alive processes alive and '$MYADMIN ping' resulted in\n$ping_output\n" | $ERR_LOGGER -p daemon.debug
fi
return 1 # EXIT_FAILURE
fi
}
#
# main()
#
case "${1:-''}" in
'start')
sanity_checks;
# Start daemon
log_daemon_msg "Starting MySQL database server" "mysqld"
if mysqld_status check_alive nowarn; then
log_progress_msg "already running"
log_end_msg 0
else
/usr/bin/mysqld_safe > /dev/null 2>&1 &
# 6s was reported in #352070 to be too few when using ndbcluster
for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14; do
sleep 1
if mysqld_status check_alive nowarn ; then break; fi
log_progress_msg "."
done
if mysqld_status check_alive warn; then
log_end_msg 0
# Now start mysqlcheck or whatever the admin wants.
output=$(/etc/mysql/debian-start)
[ -n "$output" ] && log_action_msg "$output"
else
log_end_msg 1
log_failure_msg "Please take a look at the syslog"
fi
fi
# Some warnings
if $MYADMIN variables | egrep -q have_bdb.*YES; then
echo "BerkeleyDB is obsolete, see /usr/share/doc/mysql-server-5.0/README.Debian.gz" | $ERR_LOGGER -p daemon.info
fi
if [ -f /etc/mysql/debian-log-rotate.conf ]; then
echo "/etc/mysql/debian-log-rotate.conf is obsolete, see /usr/share/doc/mysql-server-5.0/NEWS.Debian.gz" | $ERR_L
fi
;;
'stop')
# * As a passwordless mysqladmin (e.g. via ~/.my.cnf) must be possible
# at least for cron, we can rely on it here, too. (although we have
# to specify it explicit as e.g. sudo environments points to the normal
# users home and not /root)
log_daemon_msg "Stopping MySQL database server" "mysqld"
if ! mysqld_status check_dead nowarn; then
set +e
shutdown_out=`$MYADMIN shutdown 2>&1`; r=$?
set -e
if [ "$r" -ne 0 ]; then
log_end_msg 1
[ "$VERBOSE" != "no" ] && log_failure_msg "Error: $shutdown_out"
log_daemon_msg "Killing MySQL database server by signal" "mysqld"
killall -15 mysqld
server_down=
for i in 1 2 3 4 5 6 7 8 9 10; do
sleep 1
if mysqld_status check_dead nowarn; then server_down=1; break; fi
done
if test -z "$server_down"; then killall -9 mysqld; fi
fi
fi
if ! mysqld_status check_dead warn; then
log_end_msg 1
log_failure_msg "Please stop MySQL manually and read /usr/share/doc/mysql-server-5.0/README.Debian.gz!"
exit -1
else
log_end_msg 0
fi
;;
'restart')
set +e; $SELF stop; set -e
$SELF start
;;
'reload'|'force-reload')
log_daemon_msg "Reloading MySQL database server" "mysqld"
$MYADMIN reload
log_end_msg 0
;;
'status')
if mysqld_status check_alive nowarn; then
log_action_msg "$($MYADMIN version)"
else
log_action_msg "MySQL is stopped."
exit 3
fi
;;
*)
echo "Usage: $SELF start|stop|restart|reload|force-reload|status"
exit 1
;;
esac
I found many hints, but I would like this resolved to a certain degree of reliability for production servers.
Edit: It seems to be exactly this unsolved bug.
Maybe it is this bug from the MySQL site.
This seems related or identical.
Some people talk of a race condition with 2 instances of mysqld_safe. Others suggest commenting out the sanity check in the startup script.
I would try to figure out what is causing the CPU issue, rather than investigate how to re-write the startup script. The startup script is fairly standard and should work well in a production environment.