Issues with helm install Orion Context Broker - fiware

I'm trying to install FIWARE Orion on AKS using your Helm chart. I installed MongoDB using
helm repo add azure-marketplace https://marketplace.azurecr.io/helm/v1/repo
helm install my-release azure-marketplace/mongodb
Then I configured MongoDB in values.yaml as follows:
## database configuration
db:
  # -- configuration of the mongo-db hosts. If multiple hosts are given, it is assumed that mongo runs as a replica set
  hosts: [my-release-mongodb]
  # - my-release-mongodb
  # -- the db to use. If running in multiservice mode, it is used as a prefix.
  name: orion
  # -- Database authentication (not needed if MongoDB doesn't use --auth)
  auth:
    # -- user for connecting mongo
    user: root
    # -- password to be used on mongo
    password: mypasswd
    # -- the MongoDB authentication mechanism to use when user and password are set
    # mech: SCRAM-SHA-1
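(For reference, these values amount to a MongoDB connection of roughly this shape; this is only a sketch, since the exact URI the chart assembles may differ:)

mongodb://root:mypasswd@my-release-mongodb:27017/orion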
I use the command: helm install test orion
As I see this error in the pod logs, I suppose something is wrong:
kubectl logs test-orion-7dfcc9c7fb-8vbgw
time=2021-05-28T19:50:29.737Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=mongocContextCachePersist.cpp[59]:mongocContextCachePersist | msg=Database Error (persisting context: command insert requires authentication)
Can you help me with this please?
Kind regards,
Johan

You should ensure that MongoDB is actually reachable at "my-release-mongodb:27017"; you can use "kubectl get services" for that. Besides that, make sure that "root:mypasswd" really are the credentials set up in MongoDB.
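For example, a quick way to check both is sketched below; the secret name and key follow the Bitnami MongoDB chart's conventions (which the azure-marketplace repo mirrors), so adjust them if yours differ:

# 1. Confirm the service exists and exposes 27017
kubectl get services

# 2. Read back the root password the chart actually stored
kubectl get secret my-release-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 -d

# 3. Try authenticating from inside the cluster (in the Bitnami setup the root user lives in the admin database)
kubectl run mongo-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb:4.4 \
  --command -- mongo --host my-release-mongodb --authenticationDatabase admin -u root -p mypasswd

If step 3 authenticates fine, the commented-out mech: SCRAM-SHA-1 line in the values above may be worth revisiting.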

Related

How can I deploy AdonisJS 5 on Digital Ocean

I am trying to deploy an AdonisJS 5 API on Digital Ocean. I encountered an error relating to .env. Below is the output containing the error.
gmh@ubuntu-gmh:~/www/gmh-api$ cd build
gmh@ubuntu-gmh:~/www/gmh-api/build$ node ace migration:run --force
Exception
E_MISSING_ENV_VALUE: Missing environment variable "DB_CONNECTION"
gmh@ubuntu-gmh:~/www/gmh-api/build$
I have DB_CONNECTION declared in the .env file:
PORT=3333
HOST=127.0.0.1
NODE_ENV=production
DRIVE_DISK=local
DB_CONNECTION=mysql
Below is the database configuration code:
connection: Env.get('DB_CONNECTION'),

connections: {
  /*
  |--------------------------------------------------------------------------
  | MySQL config
  |--------------------------------------------------------------------------
  |
  | Configuration for MySQL database. Make sure to install the driver
  | from npm when using this connection
  |
  | npm i mysql2
  |
  */
  mysql: {
    client: 'mysql2',
    connection: {
      host: Env.get('MYSQL_HOST'),
      port: Env.get('MYSQL_PORT'),
      user: Env.get('MYSQL_USER'),
      password: Env.get('MYSQL_PASSWORD', ''),
      database: Env.get('MYSQL_DB_NAME'),
    },
  },
},
N.B.: MySQL is installed on the Nginx server on Digital Ocean.
Any helpful insight is welcome.
I tried running:
node ace migration:run --force
inside the build folder root, after which I got:
Exception
E_MISSING_ENV_VALUE: Missing environment variable "DB_CONNECTION"
Move to the root directory of your project and run your migration command:
node ace migration:run --force
If you encounter this error:
create table `adonis_schema_versions` (`version` int not null) - Unable to create or change a table without a primary key, when the system variable 'sql_require_primary_key' is set.
Visit this answer in this link.
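A related gotcha: the compiled app runs from build/ and reads the .env in its own working directory, so if you run commands from inside build/, make sure a .env exists there too. A minimal sketch of the usual sequence:

# from the project root: copy the .env into the compiled output, then migrate
cp .env build/.env
cd build
node ace migration:run --force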

Use kubernetes with helm and provide my predefined user and password with bitnami correctly

I am using kubernetes with helm 3.
I need to create a Kubernetes pod running MySQL with:
database name: my_database
user: root
password: 12345
port: 3306
The steps:
creating chart by:
helm create test
After the chart is created, change the Chart.yaml file in the test folder by adding a dependencies section:
apiVersion: v2
name: test3
description: A Helm chart for Kubernetes
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: mysql
    version: 8.8.23
    repository: "https://charts.bitnami.com/bitnami"
run:
helm dependencies build test
After that there is a compressed .tgz file; I extracted it, found a tar file inside, extracted that too, and kept only the final extracted folder.
I presume this isn't the best approach for changing a parameter in the Bitnami chart's yaml, and I would also like to know a better approach, e.g. one using a secured yaml. I need to change the user and password and point to the database, so I changed the values.yaml directly (any better approach?) for the values auth.rootPassword and the database my_database.
The next steps:
helm dependencies build test
helm install test ./test --namespace test --create-namespace
After that, two pods are created. I could check it by:
kubectl get pods -n test
and I see two pods running (maybe replication); one of the pods is test-mysql-0 (the other has a random suffix).
run:
kubectl exec --stdin --tty test-mysql-0 --namespace test -- /bin/sh
did enter the pod.
run:
mysql -uroot -p12345
and then:
show databases;
That showed all the databases, including the created database my_database, successfully.
When I tried opening the database from MySQL Workbench (same user root, password, port 3306, and localhost), the connection test failed (the 'Test Connection' button in the database properties returns 'failed to connect to database').
Why can't I connect properly from MySQL Workbench, while inside the pod itself there is no problem at all?
Is there any better approach than extracting the tgz file as I described above, and can I pass the user and password in a better (more secured) way, e.g. via some yaml?
(Right now it is only the root password.)
Thanks.
It sounds like you're trying to set the parameters in the dependent chart (please correct me if I'm wrong).
If this is right, all you need to do is add another section to your chart's values.yaml:
name-of-dependency:
  user-name: ABC
  password: abcdef
the "name-of-dependency" is specified in your Chart.yaml file when you declare your chart. For example, here's my redis dependency from one of my own charts
dependencies:
  - name: redis
    repository: https://charts.bitnami.com/bitnami/
    version: x.x.x
Then, when I install the chart, I can override the redis chart's settings by doing this in my own chart's values.yaml:
redis:
  architecture: standalone
  auth:
    password: "secret-password-here"

Connecting to MySQL 5.6 inside Docker For Desktop/Kubernetes: ERROR 1130 (HY000): Host 'xx.xx.xx.xx' is not allowed to connect to this MySQL server

I'm following these instructions (page 181) to create a persistent volume & claim, a MySQL replica set & service. I specify MySQL v5.6 in the yaml file for the replica set.
After viewing the log for the pod, it looks like it started successfully. So then I run:
kubectl run -it --rm --image=mysql --restart=Never mysql-client -- bash
mysql -h mysql -p 3306 -u root
It prompts me for the password and then I get this error:
ERROR 1130 (HY000): Host '10.1.0.17' is not allowed to connect to this MySQL server
Apparently MySQL has a feature where it does not allow remote connections by default, and I have to change the configuration files; I don't know how to do that inside a yaml file. Below is my YAML. How do I change it to allow remote connections?
Thanks
Siegfried
cat <<END-OF-FILE | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: mysql
  # labels so that we can bind a Service to this Pod
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: tododata
          image: mysql:5.6
          resources:
            requests:
              cpu: 1
              memory: 2Gi
          env:
            # Environment variables are not a best practice for security,
            # but we're using them here for brevity in the example.
            # See Chapter 11 for better options.
            - name: MYSQL_ROOT_PASSWORD
              value: some-password-here
          livenessProbe:
            tcpSocket:
              port: 3306
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: tododata
              # /var/lib/mysql is where MySQL stores its databases
              mountPath: "/var/lib/mysql"
      volumes:
        - name: tododata
          persistentVolumeClaim:
            claimName: tododata
END-OF-FILE
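For background, MySQL decides who may connect based on user@host grant entries, so "Host ... is not allowed to connect" means no entry matches the client's address. One common remedy, sketched here under the assumption that you can exec into the running pod (the pod name suffix is illustrative), is to open the root account to remote hosts:

# MySQL 5.6 still accepts GRANT ... IDENTIFIED BY in a single statement
kubectl exec -it mysql-abcde -- \
  mysql -uroot -psome-password-here \
  -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'some-password-here'; FLUSH PRIVILEGES;"

Granting root@'%' is acceptable for local experiments but far too permissive for anything shared.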
Sat Oct 24 2020 3PM EDT Update: Try Bitnami MySQL
I like Ben's idea of using Bitnami MySQL because then I don't have to create my own custom docker image. However, when using Bitnami and trying to connect to the MySQL server I get:
This happens after I successfully get a bash shell with this command:
kubectl run my-release-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
Then, as per the instructions, I do this and get the HY000 error above.
mysql -h my-release-mysql.default.svc.cluster.local -uroot -p
Wed Nov 04 2020 Update:
Thanks Ben. Yes, I had already tried that on Oct 24 (approx), and when I do a k describe pod I get mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)' Check that mysqld is running and that the socket: '/opt/bitnami/mysql/tmp/mysql.sock' exists!
Of course, when I run the mysql client as described in the nicely generated instructions, the client cannot connect because mysqld has died.
This is after having deleted the pvcs and stss and doing helm delete my-release prior to reinstalling via helm.
Unfortunately, when I tried this the first time (a couple of weeks ago) I did not set the root password and used the default generated password and I think it is still trying to use that.
This did work on Azure Kubernetes after having created a fresh Azure Kubernetes cluster. How can I reset the Kubernetes cluster in my Docker for Desktop on Windows? I tried Google searching and no luck so far.
Thanks
Siegfried
After a lot of help from the Bitnami folks, I learned that the spinning disks on my 4-year-old notebook computer are kinda slow (now why this is a problem with Bitnami MySQL and not Bitnami PostgreSQL is a mystery).
This works for me:
helm install my-mysql bitnami/mysql \
--set image.debug=true \
--set primary.persistence.enabled=false,secondary.persistence.enabled=false \
--set primary.readinessProbe.enabled=false,primary.livenessProbe.enabled=false \
--set secondary.readinessProbe.enabled=false,secondary.livenessProbe.enabled=false
This turns off the persistent volumes, so the data is lost when the pod dies.
Yes, this is useful for me for development purposes, and no one should be using Docker For Desktop/Kubernetes for production anyway... I just need to populate a tiny database and test my queries, and if I need to repopulate the database every time I reboot, well, that is not a big problem.
So maybe I need to get a new notebook computer? The price of notebook computers with 4TB of spinning disk space has gone up in the last couple of years... And I cannot find any SSD drives of that size, so even if I purchased a new replacement with spinning disks I might have the same problem? Hmm...
Thanks everyone for your help!
Siegfried
This appears to work just fine for me on Windows. Complete the following steps:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release --set root.password=awesomePassword bitnami/mysql
This is all you need to run the MySQL instance; it creates a few services and a statefulset. Then, to connect to it, you:
Either have to be in another Kubernetes container (without this, you will not find the DNS record for my-release-mysql.default.svc.cluster.local):
kubectl run my-release-mysql-client --rm --tty -i --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
mysql -h my-release-mysql.default.svc.cluster.local -uroot -p my_database
For the password, it should be 'awesomePassword'
Or port-forward the service to your local machine:
kubectl port-forward svc/my-release-mysql 3306:3306
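With the port-forward running, a local client (or MySQL Workbench) can then connect via the loopback address, for example:

mysql -h 127.0.0.1 -P 3306 -uroot -p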
As a note, a Bitnami container will have issues if you kill it and restart it with only your helm commands and the password is not set: the persistent volume claim will usually stick around, so you would need to set the password to the old password. If you do not specify the password, you can get it by running the commands Bitnami prints at install time:
NAME: my-release
LAST DEPLOYED: Thu Oct 29 20:39:23 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES: Please be patient while the chart is being deployed

Tip:
  Watch the deployment status using the command:
    kubectl get pods -w --namespace default

Services:
  echo Master: my-release-mysql.default.svc.cluster.local:3306
  echo Slave:  my-release-mysql-slave.default.svc.cluster.local:3306

Administrator credentials:
  echo Username: root
  echo Password: $(kubectl get secret --namespace default my-release-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)

To connect to your database:
  1. Run a pod that you can use as a client:
     kubectl run my-release-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
  2. To connect to master service (read/write):
     mysql -h my-release-mysql.default.svc.cluster.local -uroot -p my_database
  3. To connect to slave service (read-only):
     mysql -h my-release-mysql-slave.default.svc.cluster.local -uroot -p my_database

To upgrade this helm chart:
  1. Obtain the password as described in the 'Administrator credentials' section and set the 'root.password' parameter as shown below:
     ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
     helm upgrade my-release bitnami/mysql --set root.password=$ROOT_PASSWORD

Unable to set endpoint using the Azure CLI

I used docker-machine with Azure as the driver to spin up a VM. I then deployed a simple nginx test container onto the host. My issue is that when I try to set an endpoint I get the following error:
azure vm endpoint create huldra 80 32769
info: Executing command vm endpoint create
+ Getting virtual machines
+ Reading network configuration
+ Updating network configuration
error: Parameter 'ConsoleScreenshotBlobUri' should not be set.
info: Error information has been recorded to /Users/ryan/.azure/azure.err
error: vm endpoint create command failed
When I look at the error log, it pretty much repeats what the console said: Parameter 'ConsoleScreenshotBlobUri' should not be set.
Here are my Docker and Azure environment details:
❯ docker info
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 3
Server Version: 1.10.2
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 21
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 4.2.0-18-generic
Operating System: Ubuntu 15.10
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.636 GiB
Name: huldra
ID: PHUY:JRE3:DOJO:NNWO:JBBH:42H2:56ZO:HVSB:MZDE:QLOI:GO6F:SCC5
WARNING: No swap limit support
Labels:
provider=azure
~/Projects/dockerswarm master*
❯ docker ps -a
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                                           NAMES
ce51127b2bb8   nginx   "nginx -g 'daemon off"   11 minutes ago   Up 11 minutes   0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp   machinenginx
❯ azure --version
0.9.17 (node: 5.8.0)
❯ azure vm list
info: Executing command vm list
+ Getting virtual machines
data: Name Status Location DNS Name IP Address
data: ------ --------- -------- ------------------- -------------
data: huldra ReadyRole West US huldra.cloudapp.net x.x.x.x
info: vm list command OK

Running non-www stuff on an Elastic Beanstalk Docker container

I want to run an SMTP server in a Docker container on Elastic Beanstalk, so in my Dockerfile I have exposed port 25 (and no other ports):
EXPOSE 25
I also edited the Beanstalk load balancer (using the EC2 web admin) and added port 25 to it:
| LB Protocol | LB Port | Instance Protocol | Instance Port | SSL |
| TCP | 25 | TCP | 25 | N/A |
....
I also edited the security group of the instance to allow inbound TCP traffic on port 25 from all locations, to be able to connect to the instance directly.
It doesn't seem to work though. If I use the same Dockerfile in VirtualBox (with the option -p 25:25) I can connect to port 25 through the host machine and the SMTP server is listening. If I run the container in Elastic Beanstalk with the before-mentioned configuration, I can't connect to port 25 either through the load balancer or directly on the EC2 instance.
Any ideas what I'm doing wrong here?
Instead of editing the load balancer configuration directly from the EC2 web admin, it is recommended that you do it using Elastic Beanstalk ebextensions, because those changes then persist for your environment even if the EC2 instances in the auto-scaling group are replaced.
Can you try the following?
Create a file "01-elb.config" in a folder called .ebextensions in your app source with the following contents:
option_settings:
  - namespace: aws:cloudformation:template:parameter
    option_name: InstancePort
    value: 25

Resources:
  AWSEBLoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      Listeners:
        - InstancePort: 25
          LoadBalancerPort: 80
          Protocol: TCP
        - InstancePort: 25
          LoadBalancerPort: 25
          Protocol: TCP
      AvailabilityZones:
        - us-west-2a
        - us-west-2b
        - us-west-2c
      HealthCheck:
        Timeout: 5
        Target: TCP:25
        Interval: 30
        HealthyThreshold: 3
        UnhealthyThreshold: 5
This file is in YAML format and hence indentation is important.
The option setting ('aws:cloudformation:template:parameter', 'InstancePort') sets the instance port to 25 and also modifies the security group to make sure that port 25 is accessible by the load balancer.
The Resources section overrides the default load balancer resource created by Elastic Beanstalk with two listeners, both of which have the instance port set to 25. Hope that helps.
Read more about customizing your environment with ebextensions here.
Can you try creating a new environment with the above file at .ebextensions/01-elb.config in the app source directory? Let me know if you run into any issues.
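For reference, the resulting app source layout would look roughly like this (assuming a single-container Docker platform with the Dockerfile at the root):

app-source/
├── .ebextensions/
│   └── 01-elb.config
└── Dockerfile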