PERSEO_NOTICES_PATH='/notices', PERSEO_RULES_PATH='/rules': how to create a subscription to Orion from Perseo CEP, and how rules are notified between Orion and the CEP - FIWARE

I want to create a subscription from Perseo CEP to Orion CB, so that when an attribute changes, Perseo CEP fires a rule.
How should I use these 3 directives:
- PERSEO_NOTICES_PATH='/notices'
- PERSEO_RULES_PATH='/rules'
- MAX_AGE
For MAX_AGE, I want to set it so that rules last forever, or at least for many years.
perseo-core:
  image: fiware/perseo-core
  hostname: perseo-core
  container_name: fiware-perseo-core
  depends_on:
    - mongo-db
    - orion
  networks:
    - smartcity
  ports:
    - "8080:8080"
  environment:
    - PERSEO_FE_URL=http://perseo-fe:9090
    - MAX_AGE=9999
perseo-front:
  image: fiware/perseo
  hostname: perseo-fe
  container_name: fiware-perseo-fe
  networks:
    - smartcity
  ports:
    - "9090:9090"
  depends_on:
    - perseo-core
  environment:
    - PERSEO_ENDPOINT_HOST=perseo-core
    - PERSEO_ENDPOINT_PORT=8080
    - PERSEO_MONGO_HOST=mongo-db
    - PERSEO_MONGO_URL=http://mongo-db:27017
    - PERSEO_MONGO_ENDPOINT=mongo-db:27017
    - PERSEO_ORION_URL=http://orion:1026/
    - PERSEO_LOG_LEVEL=debug
    - PERSEO_CORE_URL=http://perseo-core:8080
    - PERSEO_SMTP_SECURE=true
    - PERSEO_MONGO_USER=root
    - PERSEO_MONGO_PASSWORD=example
    - PERSEO_SMTP_HOST=x
    - PERSEO_SMTP_PORT=25
    - PERSEO_SMTP_AUTH_USER=x
    - PERSEO_SMTP_AUTH_PASS=x
    - PERSEO_NOTICES_PATH='/notices'
    - PERSEO_RULES_PATH='/rules'

You can find basic information about CB subscriptions in the NGSIv2 API walkthrough and the full detail in the NGSIv2 Specification ("Subscriptions" section).
In this case, you have to set the notification endpoint to the one corresponding to Perseo. Taking into account the above configuration for PERSEO_ENDPOINT_PORT and PERSEO_NOTICES_PATH, it should be something like this:
...
"notification": {
    "http": {
        "url": "http://<perseohost>:8080/notices"
    },
    ...
EDIT: maybe the port is 9090 instead of 8080. I'm not fully sure (9090 could be the Perseo FE port, where /notices is listening, while 8080 is the port that Perseo FE uses to contact Perseo Core).
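As a sketch, a complete NGSIv2 subscription request against the compose setup above could look like this (the Room entity type and temperature attribute are placeholder assumptions; adjust the host names to your network):

```shell
# Subscribe Orion to notify Perseo FE on temperature changes (illustrative names)
curl -X POST 'http://orion:1026/v2/subscriptions' \
  -H 'Content-Type: application/json' \
  -d '{
  "description": "Notify Perseo CEP of temperature changes",
  "subject": {
    "entities": [{"idPattern": ".*", "type": "Room"}],
    "condition": {"attrs": ["temperature"]}
  },
  "notification": {
    "http": {"url": "http://perseo-fe:9090/notices"},
    "attrs": ["temperature"]
  }
}'
```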

In the rule creation, when I sent the rule I used http://perseo-core-ip:8080/perseo-core/rules and it is not correct;
the correct endpoint is http://perseo-fe-ip:9090/rules. With that it works:
the rule is stored in MongoDB and fired properly.
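For reference, a rule creation request against the Perseo FE endpoint could be sketched as follows (the rule name, EPL text, and updated attribute are illustrative, not from the original setup):

```shell
# Create a rule in Perseo FE; it forwards the EPL to perseo-core itself
curl -X POST 'http://perseo-fe:9090/rules' \
  -H 'Content-Type: application/json' \
  -d '{
  "name": "high_temperature",
  "text": "select *, \"high_temperature\" as ruleName from pattern [every ev=iotEvent(cast(cast(temperature?,String),float)>30)]",
  "action": {
    "type": "update",
    "parameters": {
      "name": "alarm",
      "value": "on",
      "type": "Text"
    }
  }
}'
```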

Related

Scorpio broker - environment variables set in docker-compose-aaio.yml not getting picked up at runtime

I am running Scorpio using the docker-compose file docker-compose-aaio.yml, but I want to use an RDS Postgres instance rather than a container instance. I have updated the docker-compose-aaio.yml file as follows:
version: '3'
services:
  zookeeper:
    image: zookeeper
    ports:
      - "2181"
  kafka:
    image: wurstmeister/kafka
    hostname: kafka
    ports:
      - "9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_PORT: 9092
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - zookeeper
    logging:
      driver: none
  scorpio:
    image: scorpiobroker/scorpio:scorpio-aaio_1.0.2
    ports:
      - "9090:9090"
    depends_on:
      - kafka
    environment:
      spring_args: --maxLimit=1000 --reader.datasource.hikari.url=jdbc:postgresql//myrdshost.eu-west-2.rds.amazonaws.com:5432/ngb?ApplicationName=ngb_storagemanager_reader --writer.datasource.hikari.url=jdbc:postgresql//myrdshost.eu-west-2.rds.amazonaws.com:5432/ngb?ApplicationName=ngb_storagemanager_writer
However, when I run this with docker-compose -f docker-compose-aaio.yml up, I get this error:
java.net.UnknownHostException: postgres
as though the Scorpio broker is still trying to use the default Postgres database URLs (i.e. trying to use a containerised Postgres instance). It seems like the environment variables I set under spring_args are not getting applied.
I have followed the documentation in Chapters 4 and 5 here: https://scorpio.readthedocs.io/_/downloads/en/latest/pdf/.
Can you see anything I am doing wrong?
Thanks!
There is another config parameter in Spring for Postgres:
spring.datasource.hikari.url
This is missing in the documentation. I'll update it for the next release.
For the full set of changeable options you can have a look at the application.yml:
https://github.com/ScorpioBroker/ScorpioBroker/blob/development/AllInOneRunner/src/main/resources/application.yml
Good Luck
Benjamin
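Applied to the compose file above, the fix would be to pass that property in spring_args. A minimal sketch, reusing the asker's RDS host and assuming a single shared datasource (the missing property name is the one from the answer):

```yaml
scorpio:
  image: scorpiobroker/scorpio:scorpio-aaio_1.0.2
  environment:
    # spring.datasource.hikari.url is the property missing from the docs
    spring_args: --spring.datasource.hikari.url=jdbc:postgresql://myrdshost.eu-west-2.rds.amazonaws.com:5432/ngb
```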
Regarding Scorpio version scorpio-aaio-no-eureka_2.1.20, you just have to set the DBHOST and DBPORT environment variables:
scorpio:
  image: scorpiobroker/scorpio:scorpio-aaio-no-eureka_2.1.20
  environment:
    DBHOST: "example.com"
    DBPORT: "5432"
The names of the environment variables follow the options from application.yml using underscores instead of period-separation.
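So an option name from application.yml can be translated mechanically into its environment-variable form; a sketch with a standard tr pipeline (the uppercasing is an assumption based on the DBHOST/DBPORT example above):

```shell
# Translate a Spring option name to its environment-variable form:
# periods become underscores, letters become uppercase
echo "reader.datasource.hikari.url" | tr '.' '_' | tr '[:lower:]' '[:upper:]'
# prints READER_DATASOURCE_HIKARI_URL
```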

Getting mysql connection issue when scaling the mysql container to more than 1 in docker swarm

I have a host machine running in swarm mode. I am running it on a single machine for now, no clusters (no multiple machines).
The services are running fine. I have created a volume for the MySQL container; I believe that when the MySQL container is scaled, all replicas will read from the same volume.
Here is the docker-compose file, which works great with no MySQL connection issue, until I scale the MySQL container to 2:
version: "3.4"
services:
  node:
    image: prod_engineering_node:v7
    networks:
      - backend
    volumes:
      - ./codebase:/usr/src/app
    ports:
      - "8082:8082"
    depends_on:
      - engineeringmysql
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
  mysql:
    image: prod_engineering_mysql:v1
    command: mysqld --default-authentication-plugin=mysql_native_password
    networks:
      - backend
    ports:
      - "3309:3306"
    environment:
      MYSQL_ROOT_PASSWORD: main_pass
      MYSQL_DATABASE: engineering
      MYSQL_USER: user
      MYSQL_PASSWORD: pass
    volumes:
      - ./sqldata:/var/lib/mysql:rw
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
  nginx:
    image: prod_engineering_nginx:v1
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./angular_build:/var/www/html/studydote_v2/frontend:rw
      - ./laravel_admin:/var/www/html/dev/backend/public:rw
    networks:
      - backend
    depends_on:
      - engineeringphpfpm
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
  phpfpm:
    image: prod_engineering_phpfpm:v1
    ports:
      - "9001:9000"
    depends_on:
      - engineeringmysql
    networks:
      - backend
    volumes:
      - ./angular_build:/var/www/html/studydote_v2/frontend:rw
      - ./laravel_admin:/var/www/html/dev/backend/public:rw
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
networks:
  backend:
    driver: overlay
This is how I scaled the MySQL container:
docker service scale servicename=2
Now I get the DB connection issue.
Can anyone help me with it? What might be the issue? If this is the wrong way to scale a MySQL DB, please suggest better ways.
When you start a service, Docker Swarm will assign a virtual IP address to it, and load-balance all requests to that IP across the replica containers.
What probably happens (it's hard to tell without the full logs) is that TCP connections get load-balanced across both DBs: the first connection goes to replica 1, the second to replica 2, and so on.
However, MySQL connections are stateful, not stateless, so this way of scaling your DB isn't going to work. Also note that Docker won't handle the MySQL replication work for you. What people typically do is:
- avoid running multiple DB instances if you don't need to
- run 2 MySQL services, a mysql-master and a mysql-slave, each with their own config
- do some intelligent service discovery in a startup script in your MySQL image
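The master/slave option above can be sketched in compose terms as two independent services, each with its own volume. This is only an illustrative skeleton (image tags, server IDs, and paths are assumptions); the compose file alone does not configure MySQL replication, which still has to be set up inside the containers:

```yaml
# Two separate services instead of scaling one service to 2 replicas
mysql-master:
  image: mysql:5.7
  command: mysqld --server-id=1 --log-bin=mysql-bin
  volumes:
    - ./master-data:/var/lib/mysql   # each instance gets its OWN data dir
mysql-slave:
  image: mysql:5.7
  command: mysqld --server-id=2
  volumes:
    - ./slave-data:/var/lib/mysql
  depends_on:
    - mysql-master
```

The key point is that the two instances never share a data volume, unlike the scaled-replica approach in the question.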

Setting up lwm2m-node-lib to FIWARE platform

Having been stuck with my wakaama LWM2M implementation for a couple of weeks, as I reported in #154, I have no option but to try Telefónica's lwm2m-node-lib instead.
To make my point clear again: I already have IOTA, Orion, MongoDB, and Cygnus all working fine. It is my client implementation that isn't sending measures to the IOTA despite being able to connect. The scenario I want is LWM2M -> IOTA -> Orion -> Cygnus -> MongoDB.
My issue now: I want a precise explanation of the configuration I need for the lwm2m-node-lib implementation to work here, for instance where to input the server IP to connect to (where my FIWARE is running), which file to edit, etc. I have already picked a new device to use, keeping the other one aside.
My docker-compose file below:
version: "3.1"
services:
  mongo:
    image: mongo:3.4
    hostname: mongo
    container_name: fiware-mongo
    ports:
      - "27017:27017"
    networks:
      - default
    command: --nojournal
  orion:
    image: fiware/orion
    hostname: orion
    container_name: fiware-orion
    depends_on:
      - mongo
    networks:
      - default
    ports:
      - "1026:1026"
    expose:
      - "1026"
    command: -dbhost mongo -logLevel DEBUG
  lightweightm2m-iotagent:
    image: telefonicaiot/lightweightm2m-iotagent
    hostname: idas
    container_name: fiware-iotagent
    depends_on:
      - mongo
    networks:
      - default
    expose:
      - "4041"
      - "5684"
    ports:
      - "4041:4041"
      - "5684:5684/udp"
    environment:
      - "IOTA_CB_HOST=orion"
      - "IOTA_CB_PORT=1026"
      - "IOTA_NORTH_PORT=4041"
      - "IOTA_REGISTRY_TYPE=mongodb"
      - "IOTA_LOG_LEVEL=DEBUG"
      - "IOTA_TIMESTAMP=true"
      - "IOTA_MONGO_HOST=mongo"
      - "IOTA_MONGO_PORT=27017"
      - "IOTA_MONGO_DB=lwm2miotagent"
      - "IOTA_HTTP_PORT=5684"
      - "IOTA_PROVIDER_URL=http://lightweightm2m-iotagent:4041"
  cygnus:
    image: fiware/cygnus-ngsi:latest
    hostname: cygnus
    container_name: fiware-cygnus
    depends_on:
      - mongo
    networks:
      - default
    expose:
      - "5080"
    ports:
      - "5050:5050"
      - "5080:5080"
    environment:
      - "CYGNUS_MONGO_HOSTS=mongo:27017"
      - "CYGNUS_LOG_LEVEL=DEBUG"
      - "CYGNUS_SERVICE_PORT=5050"
      - "CYGNUS_API_PORT=5080"
You can have a look at:
https://hub.docker.com/r/fiware/lightweightm2m-iotagent/
There you have a very good explanation of how to use the IOTA-LWM2M Docker image, along with configuration examples to run it with Orion.
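As a quick sanity check once the stack is up, the IoT Agent's North Port can be queried for the devices it has provisioned; a sketch, where the service and subservice headers are illustrative values that must match your provisioning:

```shell
# List devices known to the IoT Agent (North Port 4041 from the compose file)
curl -X GET 'http://localhost:4041/iot/devices' \
  -H 'fiware-service: myservice' \
  -H 'fiware-servicepath: /'
```

If your device does not appear here, the client never registered against the agent, which narrows the problem down to the client-side configuration.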

“AZF domain not created for application” AuthZforce

I have an application that uses KeyRock, PEP, and PDP (AuthZForce).
Security level 1 (authentication) with KeyRock and PEP is working, but when we try to use AuthZForce to check authorization, I get the error message:
AZF domain not created for application
I have my user and my application, which I created following the steps in the FIWARE IdM User and Programmers Guide.
I am also able to create domains as stated in the AuthZForce Installation and Administration Guide, but I don't know how to bind the domain ID to user roles when creating them.
So, how can I insert users/organizations/applications under a specific domain, and then have security level 2?
My config.js file:
config.azf = {
    enabled: true,
    host: '192.168.99.100',
    port: 8080,
    path: '/authzforce/domains/',
    custom_policy: undefined
};
And my docker-compose.yml file is:
authzforce:
  image: fiware/authzforce-ce-server:release-5.4.1
  hostname: authzforce
  container_name: authzforce
  ports:
    - "8080:8080"
keyrock:
  image: fiware/idm:v5.4.0
  hostname: keyrock
  container_name: keyrock
  ports:
    - "5000:5000"
    - "8000:8000"
pepproxy:
  build: Docker/fiware-pep-proxy
  hostname: pepproxy
  container_name: pepproxy
  ports:
    - 80:80
  links:
    - authzforce
    - keyrock
This question is the same as AuthZForce Security Level 2: Basic Authorization error "AZF domain not created for application", but I get the same error, and my KeyRock version is v5.4.0.
I changed the AuthZForce GE configuration:
http://fiware-idm.readthedocs.io/en/latest/admin_guide.html#authzforce-ge-configuration
After reviewing the Horizon source code, I found that the function "policyset_update" in openstack_dashboard/fiware_api/access_control_ge.py returns immediately if ACCESS_CONTROL_MAGIC_KEY is None (the default configuration) or an empty string, so the communication with AuthZForce never takes place. Although this parameter is optional when you don't have AuthZForce behind a PEP proxy, you have to enter some text to avoid this error.
In your case, your string 'undefined' did the work. In fact, as a result, an 'X-Auth-Token: undefined' header is generated, but ignored when Horizon communicates directly with AuthZForce.
Related topic: Fiware AuthZForce error: "AZF domain not created for application"
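For completeness, a domain can also be created directly against the AuthZForce REST API; a sketch using the host, port, and path from the config.js above (the XML namespace is the one used by the AuthZForce CE 5.x REST API model, and the externalId value is a placeholder):

```shell
# Create an AuthZForce domain manually; the response contains the domain ID
curl -X POST 'http://192.168.99.100:8080/authzforce/domains' \
  -H 'Content-Type: application/xml' \
  -d '<?xml version="1.0" encoding="UTF-8"?>
<domainProperties xmlns="http://authzforce.github.io/rest-api-model/xmlns/authz/5" externalId="my-app-id"/>'
```

Note that when KeyRock is configured correctly it creates the domain for the application itself, which is why the ACCESS_CONTROL_MAGIC_KEY fix above resolves the error.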

AuthZForce Security Level 2: Basic Authorization error "AZF domain not created for application"

We are trying to deploy our security layer (KeyRock, Wilma, AuthZForce) to protect our Orion instance.
We are able to have security level 1 (authentication) with Keyrock and Wilma working, but when we try to insert AuthZForce to check the verb+resource authorization we get the error message:
AZF domain not created for application
In the PEP Proxy User Guide, under the "Level 2: Basic Authorization" section, it is stated that we have to configure the roles and permissions for the user in the application. I have created my user and registered my application following the steps in the FIWARE IdM User and Programmers Guide. I also created an additional rule that matches exactly the resource I'm trying to GET, to guarantee that there is no path mistake.
I am also able to create domains as stated in the AuthZForce Installation and Administration Guide, but I don't know how to bind the domain ID to user roles when creating them. I've searched in the IdM GUI and in the documentation, but I couldn't find how to do it.
So, how can I insert users/organizations/applications under a specific domain, and then have the security level 2?
Update:
My Wilma config.js file has this section:
...
config.azf = {
    enabled: true,
    host: 'authzforce',
    port: 8080,
    path: '/authzforce/domains/',
    custom_policy: undefined
};
...
And my docker-compose.yml file is:
pepwilma:
  image: ging/fiware-pep-proxy
  container_name: test_pepwilma
  hostname: pepwilma
  volumes:
    - ./wilma/config.js:/opt/fiware-pep-proxy/config.js
  links:
    - idm
    - authzforce
  ports:
    - "88:80"
idm:
  image: fiware/idm
  container_name: test_idm
  links:
    - authzforce
  ports:
    - "5000:5000"
    - "8000:8000"
authzforce:
  image: fiware/authzforce-ce-server
  container_name: test_authzforce
  hostname: authzforce
  ports:
    - "8080:8080"
Is the error AZF domain not created reported by KeyRock or Wilma?