My minishift version is v1.16.1+d9a86c9 and I'm running openshift origin 3.9.
I want to use a horizontal pod autoscaler in Minishift, and for that I need the metrics pods to be installed. I have searched the Minishift docs, but there's no info about how to install the Hawkular metrics.
Apparently minishift start --metrics used to work, but it's not a valid flag anymore.
Indeed, it has been removed from Minishift; see https://github.com/minishift/minishift/pull/2241 (and the same goes for the oc cluster up command-line tool: https://github.com/openshift/origin/pull/19209).
However, you can still install Hawkular with some extra steps, using the Ansible playbook: see https://docs.openshift.com/container-platform/3.9/install_config/cluster_metrics.html#deploying-the-metrics-components
You can now use --extra-clusterup-flags "--metrics" with Minishift to pass the flag through to oc cluster up.
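For example, with a Minishift release that supports the flag:
minishift start --extra-clusterup-flags "--metrics"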
I have installed Kafka on a local Minikube using the Helm charts https://github.com/confluentinc/cp-helm-charts, following these instructions https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html, like so:
helm install -f kafka_config.yaml confluentinc/cp-helm-charts --name kafka-home-delivery --namespace cust360
The kafka_config.yaml is almost identical to the default YAML, the one exception being that I scaled it down to 1 server/broker instead of 3 (just to conserve resources on my local Minikube; hopefully that's not relevant to my problem).
Also running on Minikube is a MySQL instance. Here's the output of kubectl get pods --namespace myNamespace:
I want to connect MySQL and Kafka, using one of the connectors (like Debezium MySQL CDC, for instance). In the instructions, it says:
Install your connector
Use the Confluent Hub client to install this connector with:
confluent-hub install debezium/debezium-connector-mysql:0.9.2
Sounds good, except that 1) I don't know which pod to run this command on, and 2) none of the pods seem to have a confluent-hub command available.
Questions:
Does confluent-hub not come installed via those Helm charts?
Do I have to install confluent-hub myself?
If so, which pod do I have to install it on?
Ideally this should be configurable as part of the Helm chart, but unfortunately it is not as of now. One way to work around this is to build a new Docker image from Confluent's Kafka Connect image: download the connector manually, extract the contents into a folder, and copy the contents to a path in the container. Something like below.
Contents of Dockerfile
FROM confluentinc/cp-kafka-connect:5.2.1
COPY <connector-directory> /usr/share/java
/usr/share/java is the default location where Kafka Connect looks for plugins. You could also use a different location and provide the new location (plugin.path) during your helm installation.
Build this image and host it somewhere accessible. You will also have to provide/override the image and tag details during the helm installation.
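For example, to build and publish the image (the registry and image name here are placeholders, not anything mandated by the chart):
docker build -t myregistry.example.com/cp-kafka-connect-with-debezium:5.2.1 .
docker push myregistry.example.com/cp-kafka-connect-with-debezium:5.2.1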
Here is the path to the values.yaml file. You can find the image and plugin.path values here.
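A sketch of the override, reusing the install command from the question; the cp-kafka-connect.* keys follow the subchart's values.yaml at the time of writing, and the image name is the hypothetical one built above:
helm install -f kafka_config.yaml confluentinc/cp-helm-charts \
  --name kafka-home-delivery --namespace cust360 \
  --set cp-kafka-connect.image=myregistry.example.com/cp-kafka-connect-with-debezium \
  --set cp-kafka-connect.imageTag=5.2.1 \
  --set 'cp-kafka-connect.configurationOverrides.plugin\.path=/usr/share/java'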
Just an add-on to Jegan's answer above: https://stackoverflow.com/a/56049585/6002912
You can choose to use the Dockerfile below (recommended):
FROM confluentinc/cp-server-connect-operator:5.4.0.0
RUN confluent-hub install --no-prompt debezium/debezium-connector-postgresql:1.0.0
Or you can use Docker's multi-stage build instead:
FROM confluentinc/cp-server-connect-operator:5.4.0.0
COPY --from=debezium/connect:1.0 \
/kafka/connect/debezium-connector-postgres/ \
/usr/share/confluent-hub-components/debezium-connector-postgres/
This will help you save time on getting the right jar files for your plugins, like debezium-connector-postgres.
From Confluent documentation: https://docs.confluent.io/current/connect/managing/extending.html#create-a-docker-image-containing-c-hub-connectors
The Kafka Connect pod should already have confluent-hub installed; it is that pod you should run the commands on.
The cp-kafka-connect pod has 2 containers, one of which is the cp-kafka-connect-server container. That container has confluent-hub installed. You can log in to that container and run your connector commands there. To log in to that container, run the following command:
kubectl exec -it {pod-name} -c cp-kafka-connect-server -- /bin/bash
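For example, assuming the cust360 namespace from the question (note that a connector installed into a running container this way is lost when the pod restarts, which is why the image-based approaches above are generally preferred):
kubectl get pods --namespace cust360    # find the Connect pod name
kubectl exec -it <connect-pod> -c cp-kafka-connect-server --namespace cust360 -- \
  confluent-hub install --no-prompt debezium/debezium-connector-mysql:0.9.2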
As of the latest version of the chart, this can be achieved using customEnv.CUSTOM_SCRIPT_PATH.
See README.md
The script can be passed as a secret and mounted as a volume, as sketched below.
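A rough sketch of that approach, reusing the cust360 namespace from the question; the secret and file names are hypothetical, and the exact keys for mounting the secret as a volume depend on your chart version (check the chart's README):
kubectl create secret generic connect-custom-script \
  --from-file=custom-script.sh=./install-connectors.sh \
  --namespace cust360
# then point CUSTOM_SCRIPT_PATH at wherever the secret is mounted, e.g.
# --set 'cp-kafka-connect.customEnv.CUSTOM_SCRIPT_PATH=/etc/custom-scripts/custom-script.sh'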
OpenShift Origin was installed via the Ansible playbooks.
According to this documentation, the correct command to restart is:
$ systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
However, this just results in:
Failed to restart atomic-openshift-master-api.service: Unit not found.
Failed to restart atomic-openshift-master-controllers.service: Unit not found.
What is the correct way to restart OpenShift Origin (OKD) after installing via Ansible on CentOS 7?
If you get the following error:
bash: master-restart: command not found
try:
/usr/local/bin/master-restart
If you installed OKD v3.10, you should restart the master services as follows [0]. From v3.10 the master services run as pods, so you should use the specific commands for restarting them, such as api and controllers:
# master-restart api
# master-restart controllers
[0] RESTARTING MASTER SERVICES
As far as I know, you have two alternatives:
Using Ansible
Use the same inventory.ini as you used when installing OpenShift Origin.
Assuming that you have the inventory.ini file and the openshift-ansible repository cloned under /home/user/, execute the master restart playbook:
ansible-playbook -i /home/user/inventory.ini /home/user/openshift-ansible/playbooks/openshift-master/restart.yml
Restart the services
To restart the services manually, the service names are origin-master-api and origin-master-controllers. Thus the command to restart them should be:
systemctl restart origin-master-api origin-master-controllers
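You can then verify that both units came back up:
systemctl status origin-master-api origin-master-controllers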
I strongly recommend using the first option.
On my Ubuntu laptop I was issuing some kubectl commands, including running Kubernetes from a local Docker container, and all was well ... at some point I then issued this command:
kubectl config set-cluster test-doc --server=https://104.196.108.118
Now my local kubectl fails to execute ... it looks like the server setting needs to be reset back to the default.
kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}
error: couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
I deleted and reinstalled the gcloud SDK binaries and ran
mv ~/.config/gcloud ~/.config/gcloud~ignore
gcloud init
gcloud components update kubectl
How do I delete my local kubectl settings (on Ubuntu 16.04) and start fresh?
It's important to note that you've set a kubeconfig setting for your client. When you run kubectl version, you're getting the version for both the client and the server, which in your case seems to be why the version command fails.
Updating your config
You need to update the setting to the appropriate information. You can use the same command you used before to change the server to the correct one.
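For example, reusing the cluster name from the question (the URL here is a placeholder for your real API server endpoint):
kubectl config set-cluster test-doc --server=https://<your-api-server>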
If you want to wipe the slate clean in terms of client config, you should remove the kubeconfig file(s). In my experience with the gcloud setup, this is just ~/.kube/config.
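Assuming the default location, that is simply:
rm ~/.kube/config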
If you are running the cluster through google cloud engine, you can use gcloud to get the kubeconfig set for you as per the container engine quick start guide. The following assumes that you have defaults for the project, zone, and cluster set.
gcloud container clusters get-credentials CLUSTER_NAME
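If you don't have those defaults set, you can pass them explicitly (the zone and project values are placeholders):
gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE --project PROJECT_ID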
Removing kubectl - this isn't necessary
If your goal is to wholesale get rid of kubectl, you should remove the component rather than resetting gcloud.
gcloud components remove kubectl
But that won't solve your problem: it doesn't remove or reset ~/.kube/config (at least when I run it on a Mac), and if you want to keep working with kubectl you'll need to reinstall it.
Docker versions 1.6 and above use the Docker Registry V2 API; however, the daemon is still liable to make requests looking for an old V1 registry. I think I saw that there is a configuration option to make Docker avoid making any /v1/ requests.
I saw this option very recently, but now I can't find it. I suspect it was on a page linked from the Docker email that told us the Registry will stop supporting Docker versions prior to 1.6.
I know Docker only looks for a V1 registry when it has no luck with /v2, but I want to stop it altogether. How can I stop Docker from making requests to /v1/ registry URLs under any circumstances?
--disable-legacy-registry
"prevents the docker daemon from pull, push, and login operations against v1 registries." -
Source
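On daemon versions that still support it (the option was later removed), you can pass the flag when starting the daemon; setting "disable-legacy-registry": true in /etc/docker/daemon.json should have the same effect:
dockerd --disable-legacy-registry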
Following the guide, I'm trying to manage a Google Container Engine cluster from another machine on Google Compute Engine. Here is the output from my GCE instance:
oleksandr_berezianskyi_gmail_com@docker-managed-jenkins:~$ sudo gcloud components update preview
All components are up to date.
oleksandr_berezianskyi_gmail_com@docker-managed-jenkins:~$ sudo gcloud components update alpha
All components are up to date.
oleksandr_berezianskyi_gmail_com@docker-managed-jenkins:~$ gcloud alpha container kubectl create -f cassandra.yaml
ERROR: (gcloud.alpha.container.kubectl) This command requires the kubernetes client (kubectl), which is installed with the gcloud preview component. Run 'gcloud components update preview', or make sure kubectl is installed somewhere on your path.
As you can see, my Google Cloud SDK seems to be up to date but it's still not working properly on GCE. Is there something I'm missing?
The correct way to install kubectl is now gcloud components install kubectl
You must have the Google Cloud SDK installed.
For further information, see the Quick Start Guide.
If you have run gcloud components update, that will have installed the kubectl binary on your system; it just won't be in your path. It will be located in the cloud-sdk install directory. You can add it to your path manually by running:
export PATH=$PATH:/usr/local/share/google/google-cloud-sdk/bin/
or you can create a symlink from a directory that is already in your path, like /usr/local/bin, by running:
sudo ln -s /usr/local/share/google/google-cloud-sdk/bin/kubectl /usr/local/bin/kubectl
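Either way, you can confirm kubectl is now resolvable:
which kubectl
kubectl version --client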
You can download the current version of the kubectl binary from this Google Cloud Storage URL: https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubectl
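For example, to fetch that binary and put it on your path:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/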
As of (at least) 138.0.0 (Nov 2016)
It's now gcloud components install kubectl
This is the case when running:
Your current Cloud SDK version is: 138.0.0
Here are the related instructions:
To install or remove components at your current SDK version [138.0.0], run:
$ gcloud components install COMPONENT_ID
$ gcloud components remove COMPONENT_ID
To update your SDK installation to the latest version [141.0.0], run:
$ gcloud components update
On an Apple M1 Mac running macOS 13.1 (22C65):
export PATH=$PATH:/opt/homebrew/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/
helped me.