Cannot access hawkular metrics - openshift

I have installed OpenShift Origin 3.9 using an inventory file similar to yours. I used the following lines for the metrics installation:
openshift_metrics_install_metrics=true
openshift_metrics_hawkular_hostname=hawkular-metrics.example.com
openshift_master_metrics_public_url=https://hawkular-metrics.example.com/hawkular/metrics
I installed with the inventory file by running prerequisites.yaml and then deploy_cluster.yaml, so hawkular-cassandra, hawkular-metrics and heapster are all in a running state and the oc adm top node command works.
The problem is that I cannot access Hawkular Metrics with the command below:
curl -H "Authorization: Bearer XXXXX" -H "Hawkular-Tenant: openshift-infra" -X GET https://hawkular-metrics.example.com/hawkular/metrics/metrics
It shows the error:
Could not resolve host: hawkular-metrics.example.com; Unknown error
Do I need anything extra to deploy and access metrics in this version?

Is hawkular-metrics.example.com/hawkular/metrics/metrics accessible from the client? Either a DNS server or a local /etc/hosts entry should resolve hawkular-metrics.example.com to the node where the router pod resides.
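For example, a minimal check from the client machine (a sketch; 192.0.2.10 is a placeholder for the IP of the node running the router pod, and -k skips certificate verification in case the router uses a self-signed certificate):
echo "192.0.2.10 hawkular-metrics.example.com" | sudo tee -a /etc/hosts
curl -k -H "Authorization: Bearer XXXXX" -H "Hawkular-Tenant: openshift-infra" https://hawkular-metrics.example.com/hawkular/metrics/status
If the status endpoint responds, the same curl against /hawkular/metrics/metrics should work as well.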

Related

Nameserver limits were exceeded while installing k3s

I want to install Rancher on an Ubuntu server (18.04). I am following this doc and I want to install k3s. When I install it using the command
curl -sfL https://get.k3s.io | sh
everything works.
But when I try to install it using a MySQL database as the doc says:
curl -sfL https://get.k3s.io | sh -s - server \
--datastore-endpoint="mysql://root:mypassword@tcp(localhost:3306)/dbk3s"
I received the following error:
Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is
The MySQL server is running with no problem at all and I have already created the dbk3s database. It looks like the problem is related to the fact that Ubuntu is limited to just 3 DNS nameserver records and 6 DNS search records, but I am not sure.
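A quick way to check that hypothesis (a sketch, assuming the resolver configuration picked up by the k3s-bundled kubelet is /etc/resolv.conf) is to count the entries:
grep -c '^nameserver' /etc/resolv.conf
grep '^search' /etc/resolv.conf
If there are more than 3 nameserver lines or more than 6 search domains, trimming the file, or pointing k3s at a smaller resolv.conf if your k3s version supports a --resolv-conf option, should make the warning disappear.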

minishift - Monitoring pods

As per the documentation, monitoring is shipped with OKD.
OKD ships with a pre-configured and self-updating monitoring stack that is based on the Prometheus open source project and its wider eco-system. It provides monitoring of cluster components and ships with a set of alerts to immediately notify the cluster administrator about any occurring problems and a set of Grafana dashboards.
Further, as per the documentation, this command should show links for various monitoring tools:
oc -n openshift-monitoring get routes
When I run the oc command as the system user, I get the message: No resources found.
The installation does not go through.
git clone https://github.com/openshift/cluster-monitoring-operator
cd cluster-monitoring-operator
oc apply -f manifests/
Error messages:
namespace "openshift-monitoring" created
serviceaccount "cluster-monitoring-operator" created
unable to decode "manifests/0000_50_cluster_monitoring_operator_02-role.yaml": no kind "ClusterRole" is registered for version "rbac.authorization.k8s.io/v1beta1"
unable to decode "manifests/0000_50_cluster_monitoring_operator_03-role-binding.yaml": no kind "ClusterRoleBinding" is registered for version "rbac.authorization.k8s.io/v1beta1"
unable to decode "manifests/0000_50_cluster_monitoring_operator_04-deployment.yaml": no kind "Deployment" is registered for version "apps/v1"
unable to decode "manifests/0000_50_cluster_monitoring_operator_05-clusteroperator.yaml": no kind "ClusterOperator" is registered for version "config.openshift.io/v1"
unable to decode "manifests/0000_90_cluster_monitoring_operator_00-operatorgroup.yaml": no kind "OperatorGroup" is registered for version "operators.coreos.com/v1"
So, how do we enable monitoring with minishift?
You can follow this guide to install Prometheus on Minishift:
https://github.com/minishift/minishift-addons/tree/master/add-ons/prometheus
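A rough outline of installing and applying the add-on from a local clone (a sketch; the paths below are assumptions based on that repository's layout):
git clone https://github.com/minishift/minishift-addons.git
minishift addons install minishift-addons/add-ons/prometheus
minishift addons apply prometheus --addon-env namespace=kube-system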
Be sure that you are logged in as admin. If you have problems logging in as admin, you can follow these steps:
minishift ssh
[docker@example ~]$ sudo su
[root@example ~]# export KUBECONFIG=/var/lib/minishift/base/openshift-apiserver/admin.kubeconfig PATH="$PATH:/var/lib/minishift/bin"
[root@example ~]# oc adm policy add-cluster-role-to-user cluster-admin admin
[root@example ~]# exit
[docker@example ~]$ exit
oc login -u admin -p admin
oc whoami
You will see that you are logged in as admin.
When I entered the command to apply the Prometheus add-on, I encountered this problem:
minishift addons apply prometheus --addon-env namespace=kube-system
-- Applying addon 'prometheus':.Error applying the add-on: Error executing command 'oc new-app -f prometheus.yaml -p NAMESPACE=#{namespace} -n #{namespace}'.
Solution:
1. Log in to Minishift as admin using "oc login -u admin -p admin".
2. Go to the namespace "kube-system" with "oc project kube-system".
3. In the web console, click on "Add to project" -> "import YAML/JSON".
4. Clone the prometheus add-on to your local machine from https://github.com/minishift/minishift-addons.git
5. Import ../minishift-addons/add-ons/prometheus/prometheus.yml into the "kube-system" namespace.
Afterwards, Prometheus will be deployed.
You can access the prometheus graph UI: https://prometheus-kube-system.$minishift-host-ip-address.nip.io.
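To fill in the host IP portion of that URL, something like the following should work (a sketch; the route name is assumed to match the prometheus-kube-system pattern above):
minishift ip
oc get route -n kube-system
Then open https://prometheus-kube-system.<minishift-ip>.nip.io in a browser.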

Hadoop - No route to host while configuring HUE

I have installed Hue on my local Ubuntu system and set up a Hadoop multi-node cluster across two systems.
Hadoop version: 2.7.3
Hue version: 3.12.0
Oozie version: 4.3.0
I am facing an issue when running a Sqoop job to import data from MySQL into HDFS. I am getting the following error.
Caused by: java.net.NoRouteToHostException: No Route to Host from Developer4/127.0.0.1 to cm:10020 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
HDFS URL: hdfs://master:9000
My /etc/hosts file looks like:
192.168.1.149 master
127.0.0.1 developer4
192.168.1.161 slave
Please suggest where I am going wrong. Even the Oozie start and stop commands work properly on the command line.
You have set up Hadoop on your localhost system, so you need to add or modify the property below in the mapred-site.xml file.
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>0.0.0.0:10020</value>
  <description>Host and port for the Job History Server (default 0.0.0.0:10020)</description>
</property>
After that you need to start the Job History Server with the command below.
sbin/mr-jobhistory-daemon.sh --config /home/developer4/hadoop-2.7.3/etc start historyserver
After this command the port is open on your localhost; I hope this helps.
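To confirm the history server actually came up and is listening on the default port 10020, a quick check (a sketch) is:
jps | grep JobHistoryServer
ss -ltn | grep 10020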

Unable to mount a directory on Google Compute Engine using sshfs

I am trying to mount a remote filesystem on Google Container Engine. I am following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-use-sshfs-to-mount-remote-file-systems-over-ssh
Using following sshfs command:
sudo sshfs -o sshfs_debug,allow_other <instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce-container
I am getting error:
SSHFS version 2.5
read: Connection reset by peer
I referred this link https://cloud.google.com/sdk/gcloud/reference/compute/config-ssh
and could login using ssh via following command:
$gcloud compute config-ssh
$ssh <instance-name>.<region>.<project_id>
Any ideas what might be going wrong here? I can't understand what key and username I should use for the sshfs login.
Update(11/5):
I am using following command:
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I have chowned the /mnt/gce folder to my user. I checked that the IP matches the entry in the ~/.ssh/config file. However, I still get the error read: Connection reset by peer.
The problem with the command below is that:
1) unless you have a static IP, it keeps changing on machine reboot
2) you need to use the .pub file
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I finally got it working with the following commands:
sudo mkdir /mnt/gce
sudo chown <user> /mnt/gce
sshfs -o IdentityFile=~/.ssh/google_compute_engine.pub <user_name>@<instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce
A few things that might be the cause of the problem:
Don't use sshfs as root. It's a FUSE filesystem and is meant to be user mounted.
Don't specify a full path as the remote FS. It's SSH, so by default the $PWD on the remote side is the login user's $HOME.
If ssh works, sshfs will work. The easiest way is to make sure that ~/.ssh/config has an entry for the remote host with the user, port, etc. provided.
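For example (a sketch, assuming gcloud compute config-ssh has already written a Host entry for the instance into ~/.ssh/config, as in the question):
gcloud compute config-ssh
ssh <instance-name>.<region>.<project_id>
sshfs <instance-name>.<region>.<project_id>: /mnt/gce
Leaving the remote path empty after the colon mounts the login user's home directory.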
If you get this from sshfs:
read: Connection reset by peer
it may help to make the key file read-only:
chmod 400 /{{path_to_your_key}}/keypair.pem
and connect again.

how to setup and configure mysql-proxy on ubuntu on amazon ec2

I am trying to set up mysql-proxy on Ubuntu on Amazon EC2.
I have done the following:
sudo apt-get install mysql-proxy --yes
vi /etc/default/mysql-proxy
I put the following content in /etc/default/mysql-proxy:
ENABLED="true"
OPTIONS="--proxy-lua-script=/usr/share/mysql-proxy/rw-splitting.lua
--proxy-address=127.0.0.1:3306
--proxy-backend-addresses=private_ip_of_another_ec2_db_server:3306,private_ip_of_another_ec2_db_server:3306"
I also tried with "--proxy-address=private_ip_or_public_ip_of_proxy-server:3306 or 4040"
and "--proxy-backend-addresses=public_ip_of_another_ec2_db_server:3306,public_ip_of_another_ec2_db_server:3306"
After that I tried to connect to the proxy server from another PC using mysql, like:
mysql -u some_user -pxxxxx -h proxy_server_ip
or
mysql -u some_user -pxxxxx -h proxy_server_ip -P 4040
but it's not working.
It shows the error:
ERROR 2003 (HY000): Can't connect to MySQL server on 'ip' (10061)
I want to mention that I can connect to the DB server remotely, as I have allowed remote connections from any host.
I also tried /etc/init.d/mysql-proxy start and /etc/init.d/mysql-proxy restart, but no result.
Just to inform you, /etc/init.d/mysql-proxy stop shows "failed".
Can anyone please help me to set up and configure mysql-proxy on Ubuntu?
===
Edit
I found some help from another Stack Overflow question and, following a suggestion in the comments, did the procedure below. It seems to be working now.
I installed mysql-client and mysql-server locally (on the proxy server).
Then I tried to run mysql-proxy using the following command:
mysql-proxy --proxy-backend-addresses=10.73.151.244:3306 --proxy-backend-addresses=10.73.198.7:3306 --proxy-address=:4040 --admin-username=root --admin-password=root --admin-lua-script=/usr/lib/mysql-proxy/lua/admin.lua
Then I tried to connect remotely to the proxy server, and it works.
But it seems I need to run this command under screen, because when I close the terminal the proxy stops working.
Can you please tell me whether I need to run this command under screen, or is there another way to keep it alive all the time?
There is no need to install the MySQL client or MySQL server on your mysql-proxy host.
mysql-proxy has full daemon capabilities compiled into it.
If you are running Ubuntu Server, you may wish to use an Upstart service script.
This script can be copied into /etc/init/mysql-proxy.conf
# mysql-proxy.conf (Ubuntu 14.04.1) Upstart proxy configuration file for AWS RDS
# mysql-proxy - mysql-proxy job file
description "mysql-proxy upstart script"
author "shadowbq <shadowbq@gmail.com>"
# Stanzas
#
# Stanzas control when and how a process is started and stopped
# See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn
# When to start the service
start on runlevel [2345]
# When to stop the service
stop on runlevel [016]
# Automatically restart process if crashed
respawn
# Essentially lets upstart know the process will detach itself to the background
expect daemon
# Run before process
pre-start script
[ -d /var/run/mysql-proxy ] || mkdir -p /var/run/mysql-proxy
echo "starting mysql-proxy"
end script
# Start the process
exec /usr/bin/mysql-proxy --plugins=proxy --proxy-lua-script=/usr/share/mysql-proxy/rw-splitting.lua --log-level=debug --proxy-backend-addresses=private_ip_of_another_ec2_db_server:3306,private_ip_of_another_ec2_db_server:3306 --daemon --log-use-syslog --pid-file=/var/run/mysql-proxy/mysql-proxy.pid
In the above example I hard-coded the AWS RDS servers into the script, instead of fiddling with the defaults and config file.
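Once the file is in /etc/init/mysql-proxy.conf, one plausible way to load and control the job with standard Upstart commands (a sketch; adjust names to your setup):
sudo initctl reload-configuration
sudo start mysql-proxy
sudo status mysql-proxy
sudo stop mysql-proxy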
Install Upgraded version 0.8.5
Note:
The apt repo does not have 0.8.5, so we need to download the tar from the official MySQL site.
Prerequisite:
Create the file /etc/default/mysql-proxy with the following content:
ENABLED="true"
OPTIONS="--defaults-file=/etc/mysql/mysql-proxy.cnf"
Installation procedure:
1. Download mysql-proxy 0.8.x
2. Untar it in /usr/local
3. Update the PATH environment variable with /usr/local/mysql-proxy-0.8.5-linux-debian6.0-x86-64bit/bin
4. vim /etc/environment (to update the environment path)
5. cd /usr/local/mysql-proxy-0.8.5-linux-debian6.0-x86-64bit/bin
6. Run the command sudo ./mysql-proxy --defaults-file=/etc/mysql/mysql-proxy.cnf
Sample mysql-proxy.cnf file
[mysql-proxy]
log-level=debug
log-file=/var/log/mysql-proxy.log
pid-file = /var/run/mysql-proxy.pid
daemon = true
no-proxy = false
admin-username=ADMIN
admin-password=ADMIN
proxy-backend-addresses=RDS-ENDPOINT:RDS-PORT
admin-lua-script=/usr/lib/mysql-proxy/lua/admin.lua
proxy-address=0.0.0.0:4040
admin-address=localhost:4041
Change the host IP and port to those of your RDS or MySQL instance.
Connect to the MySQL server via the proxy with:
mysql -h{proxy-host-ip} -P 4040 -u{mysql_username} -p
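Once connected, you can also check the proxy's admin interface from the proxy host to confirm the backends are up. This is a sketch that assumes the admin-address, credentials and stock admin.lua example script from the sample mysql-proxy.cnf above (that example script answers SELECT * FROM backends):
mysql -h 127.0.0.1 -P 4041 -u ADMIN -pADMIN
# at the mysql> prompt, run:
# SELECT * FROM backends;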