I'm trying to access the ContextBroker configuration file in the path /etc/sysconfig/contextBroker and it's empty. What is the problem?
https://fiware-orion.readthedocs.io/en/master/admin/running/index.html
I'm using Docker.
Also, I am testing the installation with yum on CentOS and it tells me that the repository is wrong.
I copied the repository configuration from this page:
https://github.com/telefonicaid/fiware-orion/blob/master/doc/manuals/admin/yum.md
The /etc/sysconfig/contextBroker file is used in RPM-based deployments. The Docker deployment is based on compiling Context Broker directly from sources, as you can see in the Dockerfile.
So, in this case, you have to use CLI-based configuration. Note that the Docker image is built with some of these options already set:
ENTRYPOINT ["/usr/bin/contextBroker","-fg", "-multiservice", "-ngsiv1Autocast" ]
But you can add additional ones. For example, in the reference docker-compose.yml we set -dbhost, and more options could be added in the same way:
command: -dbhost mongo
I guess that with docker run you could also append options in the same way the command key works in docker-compose.yml, although I don't know the details. Maybe some Docker expert could add more info :)
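For illustration, a minimal sketch of passing extra options with docker run (assuming the fiware/orion image and a reachable MongoDB container named mongo; adjust names and networking to your setup). Anything placed after the image name is appended to the image's ENTRYPOINT, so it arrives as additional Context Broker CLI options:
# extra arguments after the image name become Context Broker CLI options
docker run -d --name orion -p 1026:1026 --link mongo:mongo fiware/orion -dbhost mongo -logLevel DEBUG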
First of all... yes... yes, I know there are a thousand questions and solutions for this. But unfortunately none of them helps me.
Let's get to the problem:
I have a running Docker container in which MySQL is configured. Now I would like to change the bind address from 127.0.0.1 to 0.0.0.0. Unfortunately I can't open my.cnf because neither nano nor vim is installed. With apk, yum, apt-get and so on, all I get is:
apt-get: command not found
apk: command not found
...
Could someone maybe help me out with my little problem?
Best thanks and greetings
The default base for the MySQL Docker image has been changed to an Oracle-based Linux distribution. In that distribution, the default package manager is yum. If for whatever reason you still want to use apt, pull the Debian image explicitly, something like mysql:8-debian.
See this issue for more detail.
You could do a docker cp to copy the file out of the container, edit it, and then docker cp it back in again. This may be fine if you need it for troubleshooting, but you probably want to fix this in your deployment process instead. You should be able to destroy and re-create the Docker container without having to fix configurations manually. This should be handled in your Dockerfile, or perhaps by copying the correct configuration file in via your Docker Compose file.
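For illustration, a minimal sketch of that copy-out/edit/copy-in round trip (assuming the container is named mysql1 and the config lives at /etc/mysql/my.cnf; the exact path varies by image):
# copy the config out to the host, change the bind address, copy it back, restart
docker cp mysql1:/etc/mysql/my.cnf ./my.cnf
sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' my.cnf
docker cp ./my.cnf mysql1:/etc/mysql/my.cnf
docker restart mysql1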
OpenShift details:
Paid Professional version.
Version Information:
Been trying to create a build from a Dockerfile in OpenShift.
It's tough going.
So I tried to use the existing templates in the Cluster Console.
One of them is the Docker one. When I press "Try it", it generates a sample BuildConfig; when I then try to create it, it gives me the error:
(I have now raised the above in the Origin upstream issue tracker)
Anyhoo... does anyone know how to specify, in a BuildConfig, an image built from a Dockerfile in a Git repo? I would be grateful to know.
You can see the build strategies allowed for OpenShift Online on the product website: https://www.openshift.com/products/online. Dockerfile build isn't deprecated, it's just explicitly disallowed in OpenShift Online. You can build your Dockerfile locally and push it directly to the OpenShift internal registry (commands for docker login and docker push are on your cluster's About page).
However, in other environments (not OpenShift Online), you can specify a Dockerfile build as follows, providing a Git repo with a Dockerfile contained within (located at BuildConfig.spec.source.contextDir):
strategy:
  type: Docker
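For context, a minimal sketch of a complete BuildConfig using the Docker strategy (the metadata name, repository URI, and output image tag below are placeholders, not values from the question):
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-docker-build                         # hypothetical name
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/repo.git  # placeholder repo
    contextDir: .                               # directory containing the Dockerfile
  strategy:
    type: Docker
    dockerStrategy: {}
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest                       # hypothetical output image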
There are additional options that can be configured for a Dockerfile build as well, outlined in https://docs.okd.io/latest/dev_guide/builds/build_strategies.html#docker-strategy-options.
I have installed Kafka on a local Minikube by using the Helm charts https://github.com/confluentinc/cp-helm-charts following these instructions https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html like so:
helm install -f kafka_config.yaml confluentinc/cp-helm-charts --name kafka-home-delivery --namespace cust360
The kafka_config.yaml is almost identical to the default yaml, with the one exception being that I scaled it down to 1 server/broker instead of 3 (just because I'm trying to conserve resources on my local minikube; hopefully that's not relevant to my problem).
Also running on Minikube is a MySQL instance. Here's the output of kubectl get pods --namespace myNamespace:
I want to connect MySQL and Kafka, using one of the connectors (like Debezium MySQL CDC, for instance). In the instructions, it says:
Install your connector
Use the Confluent Hub client to install this connector with:
confluent-hub install debezium/debezium-connector-mysql:0.9.2
Sounds good, except that 1) I don't know which pod to run this command on, and 2) none of the pods seem to have a confluent-hub command available.
Questions:
Does confluent-hub not come installed via those Helm charts?
Do I have to install confluent-hub myself?
If so, which pod do I have to install it on?
Ideally this should be configurable as part of the Helm chart, but unfortunately it is not as of now. One way to work around this is to build a new Docker image from Confluent's Kafka Connect Docker image: download the connector manually, extract the contents into a folder, and copy them to a path in the container. Something like below.
Contents of Dockerfile
FROM confluentinc/cp-kafka-connect:5.2.1
COPY <connector-directory> /usr/share/java
/usr/share/java is the default location where Kafka Connect looks for plugins. You could also use a different location and provide the new path (plugin.path) during your Helm installation.
Build this image and host it somewhere accessible. You will also have to provide/override the image and tag details during the helm installation.
Here is the path to the values.yaml file. You can find the image and plugin.path values here.
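For illustration, a sketch of overriding those values at install time (the --set keys follow the image and imageTag value names in the cp-kafka-connect subchart; the registry path and tag are placeholders):
helm install -f kafka_config.yaml confluentinc/cp-helm-charts \
  --name kafka-home-delivery --namespace cust360 \
  --set cp-kafka-connect.image=myregistry/cp-kafka-connect-with-connectors \
  --set cp-kafka-connect.imageTag=5.2.1-custom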
Just an add-on to Jegan's comment above: https://stackoverflow.com/a/56049585/6002912
You can choose to use the Dockerfile below. Recommended.
FROM confluentinc/cp-server-connect-operator:5.4.0.0
RUN confluent-hub install --no-prompt debezium/debezium-connector-postgresql:1.0.0
Or you can use Docker's multi-stage build instead.
FROM confluentinc/cp-server-connect-operator:5.4.0.0
COPY --from=debezium/connect:1.0 \
/kafka/connect/debezium-connector-postgres/ \
/usr/share/confluent-hub-components/debezium-connector-postgres/
This will help you save time getting the right JAR files for plugins like debezium-connector-postgres.
From Confluent documentation: https://docs.confluent.io/current/connect/managing/extending.html#create-a-docker-image-containing-c-hub-connectors
The Kafka Connect pod should already have confluent-hub installed. That is the pod you should run the commands on.
The cp-kafka-connect pod has 2 containers; one of them is the cp-kafka-connect-server container. That container has confluent-hub installed. You can log in to that container and run your connector commands there. To log in to that container, run the following command:
kubectl exec -it {pod-name} -c cp-kafka-connect-server -- /bin/bash
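Once inside, the install command from the question should work as-is:
confluent-hub install --no-prompt debezium/debezium-connector-mysql:0.9.2
Keep in mind that anything installed this way lives only in that container's filesystem and is lost when the pod is recreated, which is why the custom-image approach above is more durable.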
As of the latest version of the chart, this can be achieved using customEnv.CUSTOM_SCRIPT_PATH.
See the README.md.
The script can be passed as a secret and mounted as a volume.
I want to set the -Djboss.server.default.config JVM option through an environment variable in my JBoss AS 7 cartridge.
I have tried using an action hook as follows:
export _JAVA_OPTS="$_JAVA_OPTS -Djboss.server.default.config=standalone-custom.xml"
and the file name is pre_start_jboss-as7.
This env variable is not picked up by JBoss. I tried restarting JBoss as well, but still no luck.
I also tried the rhc set-env command from the command prompt, but still no luck.
Can anyone help me set this environment variable on my JBoss AS7 cartridge?
You can create a file named JAVA_OPTS_EXT in your gear path ~/jbosseap/env/
and put the -Djboss.server.default.config option into this file.
Then, when the JBoss gear starts, it will append this value to your JAVA_OPTS.
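For illustration, a sketch of doing that over SSH on the gear (the ~/jbosseap path matches an EAP cartridge; the env directory name may differ for a JBoss AS 7 cartridge):
# on the gear, write the option into the env file
echo "-Djboss.server.default.config=standalone-custom.xml" > ~/jbosseap/env/JAVA_OPTS_EXT
# then restart the app, e.g. from your local machine:
# rhc app restart -a yourapp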
As far as I know, when you have your local Git copy of your application's repository, there is a standalone.xml in that directory structure, and that is the file JBoss loads in your OpenShift gear; have you tried working with that file instead?
I am migrating from DotCloud to Elastic Beanstalk.
Using DotCloud, they clearly explained how to set up Python Worker, and how to use supervisord.
Moving to Elastic Beanstalk, I am lost on how I could do that.
I have a script myworker.py and want to make sure it is always running. How?
Elastic Beanstalk is just a stack configuration tool over EC2, ELB, and Auto Scaling.
One approach you can use is to create your own AMI, but since October last year there is another approach that will probably be more suitable for your needs: ebextensions.
.ebextensions is just a directory in your application that gets detected once your application has been loaded by AWS.
Here is the full documentation: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html
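For illustration, a hedged sketch of an .ebextensions config that could keep myworker.py running under supervisor (the file name, install command, and supervisor include path are assumptions; check the documentation above for your platform):
# .ebextensions/worker.config (hypothetical file name)
commands:
  01_install_supervisor:
    command: pip install supervisor          # assumed install method
files:
  "/etc/supervisord.d/myworker.ini":         # assumed supervisor include path
    mode: "000644"
    content: |
      [program:myworker]
      command=python /var/app/current/myworker.py
      autostart=true
      autorestart=true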
With Amazon Linux 2, you need to use the .platform folder to supply Elastic Beanstalk with installation scripts.
We recommend using platform hooks to run custom code on your environment instances. You can still use commands and container commands in .ebextensions configuration files, but they aren't as easy to work with. For example, writing command scripts inside a YAML file can be cumbersome and difficult to test.
So you should add a prebuild hook (example) into a .platform folder to install supervisor and a postdeploy hook (example) to restart supervisor after each deployment.
There is an ini file (example) used in the script, which is Laravel-specific.
Make sure that the .sh files from the .platform folder are executable before deploying your project:
$ chmod +x .platform/hooks/prebuild/*.sh
$ chmod +x .platform/hooks/postdeploy/*.sh
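For illustration, a hedged sketch of what such a prebuild hook might contain (the file name is an assumption; the linked examples are the reference):
#!/bin/bash
# .platform/hooks/prebuild/01_install_supervisor.sh (hypothetical name)
# install supervisor only if it is not already present
if ! command -v supervisord >/dev/null 2>&1; then
  pip3 install supervisor
fi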