OpenShift Origin was installed via the Ansible playbooks.
According to this documentation, the correct command to restart is:
$ systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
However, this just results in:
Failed to restart atomic-openshift-master-api.service: Unit not found.
Failed to restart atomic-openshift-master-controllers.service: Unit not found.
What is the correct way to restart OpenShift Origin (OKD) after installing via Ansible on CentOS 7?
If you get the following error:
bash: master-restart: command not found
try:
/usr/local/bin/master-restart
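Or add /usr/local/bin to your PATH so the documented command works as-is (a small workaround, assuming that is where the installer placed the script):
export PATH=$PATH:/usr/local/bin
master-restart api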
If you installed OKD as v3.10, you should restart the master services as follows. [0] As of v3.10 the master services run as pods, so you have to use the dedicated command to restart them, namely the api and controllers services:
# master-restart api
# master-restart controllers
[0] RESTARTING MASTER SERVICES
As far as I know, you have two alternatives:
Using Ansible
Use the same inventory.ini as you used when installing OpenShift Origin.
Assuming that you have the inventory.ini file and the openshift-ansible repository cloned under /home/user/, execute the master restart playbook:
ansible-playbook -i /home/user/inventory.ini /home/user/openshift-ansible/playbooks/openshift-master/restart.yml
Restart the services
To restart the services manually: the unit names are origin-master-api and origin-master-controllers (not the atomic-openshift-* names from the documentation), so the command to restart them is:
systemctl restart origin-master-api origin-master-controllers
I strongly recommend using the first option.
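Whichever route you take, a quick sanity check after the restart confirms the units came back up (standard systemd commands, not part of the original answer):
systemctl status origin-master-api origin-master-controllers
journalctl -u origin-master-api -e # inspect recent logs if a unit failed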
I have a very simple unit file that ships with a service which I package into an RPM file. This RPM is built and installed on Fedora 28.
My service file could not be simpler:
[Unit]
Description=Hello World
[Service]
ExecStart=/usr/bin/executable
[Install]
WantedBy=multi-user.target
In my spec file I added these sections:
%post
%systemd_post %{name}.service
%preun
%systemd_preun %{name}.service
%postun
%systemd_postun_with_restart %{name}.service
The service file is copied correctly via
mkdir -p %{buildroot}%{_unitdir}/
cp %{name}.service %{buildroot}%{_unitdir}/
in the %install section.
When I install the package, the service is not started. When I run manually
systemctl enable <service-name>
it works.
Where is my mistake that the installation does not enable and start my service?
The documentation does not say that %systemd_post starts your service. It refers to systemd.preset: you need to define and package a %{name}.preset file that specifies that your service should be enabled by default.
That should enable your service by default. I'm not entirely sure it will also be started by default, but it looks like it :)
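For illustration, a minimal preset might look like this (the file name, priority prefix, and use of the %{_presetdir} macro are my assumptions; adapt to your distribution's packaging guidelines):
# 80-myservice.preset -- hypothetical preset that enables the unit by default
enable myservice.service
and in the spec's %install section:
install -Dm644 80-%{name}.preset %{buildroot}%{_presetdir}/80-%{name}.preset
With the preset packaged, %systemd_post should enable the service on first install; starting it still requires a systemctl start (or a reboot).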
I have installed Kafka on a local Minikube using the Helm charts https://github.com/confluentinc/cp-helm-charts, following these instructions https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html, like so:
helm install -f kafka_config.yaml confluentinc/cp-helm-charts --name kafka-home-delivery --namespace cust360
The kafka_config.yaml is almost identical to the default YAML, with the one exception that I scaled it down to 1 server/broker instead of 3 (just to conserve resources on my local Minikube; hopefully that's not relevant to my problem).
Also running on Minikube is a MySQL instance. Here's the output of kubectl get pods --namespace myNamespace:
I want to connect MySQL and Kafka, using one of the connectors (like Debezium MySQL CDC, for instance). In the instructions, it says:
Install your connector
Use the Confluent Hub client to install this connector with:
confluent-hub install debezium/debezium-connector-mysql:0.9.2
Sounds good, except that 1) I don't know which pod to run this command on, and 2) none of the pods seem to have a confluent-hub command available.
Questions:
Does confluent-hub not come installed via those Helm charts?
Do I have to install confluent-hub myself?
If so, which pod do I have to install it on?
Ideally this would be configurable as part of the Helm chart, but unfortunately it is not as of now. One way to work around this is to build a new Docker image from Confluent's Kafka Connect image: download the connector manually, extract the contents into a folder, and copy them to a path in the container, something like below.
Contents of Dockerfile
FROM confluentinc/cp-kafka-connect:5.2.1
COPY <connector-directory> /usr/share/java
/usr/share/java is the default location where Kafka Connect looks for plugins. You could also use a different location and provide it (via plugin.path) during your Helm installation.
Build this image and host it somewhere accessible. You will also have to provide/override the image and tag details during the Helm installation. You can find the image and plugin.path values in the chart's values.yaml file.
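For example, the workflow could look roughly like this (the registry, image name, and --set keys are assumptions on my part; check the chart's values.yaml for the exact key names):
docker build -t registry.example.com/cp-kafka-connect-debezium:5.2.1 .
docker push registry.example.com/cp-kafka-connect-debezium:5.2.1
helm upgrade kafka-home-delivery confluentinc/cp-helm-charts -f kafka_config.yaml \
  --set cp-kafka-connect.image=registry.example.com/cp-kafka-connect-debezium \
  --set cp-kafka-connect.imageTag=5.2.1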
Just an add-on to Jegan's answer above: https://stackoverflow.com/a/56049585/6002912
You can use the Dockerfile below. Recommended.
FROM confluentinc/cp-server-connect-operator:5.4.0.0
RUN confluent-hub install --no-prompt debezium/debezium-connector-postgresql:1.0.0
Or you can use a Docker multi-stage build instead.
FROM confluentinc/cp-server-connect-operator:5.4.0.0
COPY --from=debezium/connect:1.0 \
/kafka/connect/debezium-connector-postgres/ \
/usr/share/confluent-hub-components/debezium-connector-postgres/
This will help you save time getting the right jar files for plugins like debezium-connector-postgres.
From Confluent documentation: https://docs.confluent.io/current/connect/managing/extending.html#create-a-docker-image-containing-c-hub-connectors
The Kafka Connect pod should already have confluent-hub installed. It is that pod you should run the commands on.
The cp-kafka-connect pod has 2 containers, one of which is the cp-kafka-connect-server container. That container has confluent-hub installed. You can log in to that container and run your connector commands there. To log in to that container, run the following command:
kubectl exec -it {pod-name} -c cp-kafka-connect-server -- /bin/bash
As of the latest version of the chart, this can be achieved using customEnv.CUSTOM_SCRIPT_PATH. See the README.md. The script can be passed as a secret and mounted as a volume.
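A sketch of what that might look like in the chart's values.yaml (the exact key layout is an assumption; consult the chart's README):
cp-kafka-connect:
  customEnv:
    CUSTOM_SCRIPT_PATH: /etc/custom-scripts/install-connectors.sh # hypothetical path to the mounted script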
I'm getting an error while trying to connect a Raspberry Pi running Ubuntu MATE to my Google Cloud SQL instance.
These are the steps I took to install the proxy:
git clone https://github.com/GoogleCloudPlatform/cloudsql-proxy
cd cloudsql-proxy/
sudo sh download_proxy.sh
My instance is configured this way (I deleted some characters in the image and in the code). I didn't set the network because I'll be using the proxy.
Then I downloaded my JSON key into the same folder:
wget https://drive.google.com/file/d/my_key.json
And then started the proxy:
sudo ./cloud_sql_proxy -instances=be - 21:us-central1:be =tcp:3306 \
-credential_file=./my_key.json &
But I'm getting the error:
pi@pi:~/cloudsql-proxy$ ./cloud_sql_proxy: 1: ./cloud_sql_proxy:
Syntax error: ")" unexpected
I've tried removing the .json credential file and I was getting the same error before, so I think the problem is in the setup.
My directory listing is:
Any help is appreciated :)
download_proxy.sh downloads the proxy compiled for the amd64 CPU architecture (aka x86_64). Your Raspberry Pi has an ARM CPU, so this binary cannot run on your machine; the shell falls back to interpreting the binary as a script, which is why you get the odd syntax error.
Google does not provide pre-built ARM versions of the proxy. I don't even know if it can be built on an ARM CPU. If it is possible, this is how you would do it:
Install Go, e.g. with apt-get install golang
Set up a GOPATH, as per https://github.com/golang/go/wiki/GOPATH
Run go get github.com/GoogleCloudPlatform/cloudsql-proxy/cmd/cloud_sql_proxy
Run the proxy with $GOPATH/bin/cloud_sql_proxy -instances=...
OK. I'm sharing what I did to make it work; like David, I don't know which version I was downloading.
I tried to avoid installing Go, but it was the only way to get the proxy installed:
sudo apt-get install golang-go
export GOPATH=$HOME/go
go get github.com/GoogleCloudPlatform/cloudsql-proxy/cmd/cloud_sql_proxy
cd $GOPATH/bin
wget your_key.json
sudo ./cloud_sql_proxy -instances=the_full_name_of_the_instance=tcp:3306 -credential_file=./your_key.json &
But I was getting an error because I already have MySQL running locally on the same port, so now I'm using a Unix socket instead:
sudo ./cloud_sql_proxy -instances=the_full_name_of_the_instance -credential_file=./your_key.json &
And then it's ready for connections :)
Thanks guys
I found issues with this when compiling the SQL proxy. I did, however, find that the instructions here worked great on my Raspberry Pi 3. Make sure to remove all prior Go installations and then reinstall it:
wget https://storage.googleapis.com/golang/go1.9.linux-armv6l.tar.gz
sudo tar -C /usr/local -xzf go1.9.linux-armv6l.tar.gz
export PATH=$PATH:/usr/local/go/bin # put into ~/.profile
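After unpacking, reload your profile and verify the toolchain (a quick sanity check, not part of the original instructions):
source ~/.profile
go version # should report something like go1.9 linux/arm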
I am trying to configure ejabberd with a MySQL database. After making changes to the yml file, I got this error while starting the node:
2017-01-09 13:07:27.386 [critical] <0.38.0>#ejabberd:exit_or_halt:133 failed to start application 'p1_mysql': {error,
{"no such file or directory",
"p1_mysql.app"}}
Searching for solutions, I came across a step telling me to run ./configure, but I cannot locate the directory. I tried it in /var/lib/ejabberd as well; it says "not found". I installed ejabberd on Ubuntu using apt-get install ejabberd.
How do I run configure to fix this?
If you're building from source, you should enable MySQL support via the configure script, e.g.
$ ./autogen.sh # Only needed if building directly from github repo
$ ./configure --enable-mysql
$ make
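For completeness, the yml side of the MySQL setup typically looks something like this (option names changed across ejabberd versions, older releases used odbc_* instead of sql_*, and the values here are placeholders):
sql_type: mysql
sql_server: "localhost"
sql_database: "ejabberd"
sql_username: "ejabberd"
sql_password: "yourpassword"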
On my Ubuntu laptop I was issuing some kubectl commands, including running Kubernetes from a local Docker container, and all was well ... at some point I then issued this command:
kubectl config set-cluster test-doc --server=https://104.196.108.118
Now my local kubectl fails to execute ... it looks like the server setting needs to be reset back to the default:
kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}
error: couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
I deleted and reinstalled the gcloud SDK binaries and ran
mv ~/.config/gcloud ~/.config/gcloud~ignore
gcloud init
gcloud components update kubectl
How do I delete my local kubectl settings (on Ubuntu 16.04) and start fresh?
It's important to note that you've set a kubeconfig setting for your client. When you run kubectl version, you're getting the version of both the client and the server, and in your case the server lookup is what's failing.
Updating your config
You need to update the setting with the appropriate information. You can reuse the same set-cluster command to point it at the correct server.
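For instance, reusing the command from the question with the right address (the server URL is a placeholder):
kubectl config set-cluster test-doc --server=https://<correct-api-server>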
If you want to wipe the slate clean in terms of client config, you should remove the kubeconfig file(s). In my experience with the gcloud setup, this is just ~/.kube/config.
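A non-destructive way to do that (my suggestion, not part of the original answer):
mv ~/.kube/config ~/.kube/config.bak # kubectl starts with a clean slate; delete the backup once you're happy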
If you are running the cluster through Google Container Engine, you can use gcloud to set up the kubeconfig for you, as per the container engine quick start guide. The following assumes that you have defaults set for the project, zone, and cluster.
gcloud container clusters get-credentials CLUSTER_NAME
Removing kubectl - this isn't necessary
If your goal is to get rid of kubectl wholesale, you should remove the component rather than resetting gcloud.
gcloud components remove kubectl
But that won't solve your problem, as it doesn't remove or reset ~/.kube/config (at least when I run it on a Mac), and if you want to keep working with the cluster, you'll need to reinstall kubectl.
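If you do remove it and later need it back, reinstalling is a one-liner:
gcloud components install kubectl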