“AZF domain not created for application” AuthZforce - fiware

I have an application that uses KeyRock, a PEP proxy and a PDP (AuthZForce).
Security level 1 (authentication) with Keyrock and the PEP proxy is working, but when we try to use AuthZForce to check authorization, I get the error message:
AZF domain not created for application
I have my user and my application that I created following the steps on the Fiware IdM User and Programmers Guide.
I am also able to create domains as stated in the AuthZForce - Installation and Administration Guide but I don't know how to bind the Domain ID with user roles when creating them.
So, how can I insert users/organizations/applications under a specific domain, and then have the security level 2?
My config.js file:
config.azf = {
    enabled: true,
    host: '192.168.99.100',
    port: 8080,
    path: '/authzforce/domains/',
    custom_policy: undefined
};
And my docker-compose.yml file is:
authzforce:
  image: fiware/authzforce-ce-server:release-5.4.1
  hostname: authzforce
  container_name: authzforce
  ports:
    - "8080:8080"
keyrock:
  image: fiware/idm:v5.4.0
  hostname: keyrock
  container_name: keyrock
  ports:
    - "5000:5000"
    - "8000:8000"
pepproxy:
  build: Docker/fiware-pep-proxy
  hostname: pepproxy
  container_name: pepproxy
  ports:
    - 80:80
  links:
    - authzforce
    - keyrock
This question is the same as AuthZForce Security Level 2: Basic Authorization error "AZF domain not created for application", but I still get the same error, and my Keyrock version is v5.4.0.

I changed the AuthZForce GE Configuration:
http://fiware-idm.readthedocs.io/en/latest/admin_guide.html#authzforce-ge-configuration

After reviewing the Horizon source code I found that the function "policyset_update" in openstack_dashboard/fiware_api/access_control_ge.py returns immediately if ACCESS_CONTROL_MAGIC_KEY is None (the default configuration) or an empty string, so the communication with AuthZForce never takes place. Although this parameter is optional when you don't have AuthZForce behind a PEP Proxy, you have to enter some text to avoid this error.
In your case, your string 'undefined' did the trick. As a result, an 'X-Auth-Token: undefined' header is generated, but it is ignored when Horizon communicates directly with AuthZForce.
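The guard described above can be sketched as follows (a paraphrase for illustration, not the actual Horizon code; only the early-return behaviour is taken from the answer):

```python
# Paraphrase of the guard in openstack_dashboard/fiware_api/access_control_ge.py:
# policyset_update returns immediately (never contacting AuthZForce) when
# ACCESS_CONTROL_MAGIC_KEY is None or an empty string.
def would_contact_authzforce(magic_key):
    """Return True only when the magic key is a non-empty string."""
    if magic_key is None or magic_key == '':
        return False  # early return in Horizon: AuthZForce is never called
    return True

print(would_contact_authzforce(None))         # default configuration: domain never created
print(would_contact_authzforce('undefined'))  # any non-empty placeholder works
```

This is why setting the parameter to any non-empty string, even the literal 'undefined', is enough to make the domain creation request go out.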
Related topic: Fiware AuthZForce error: "AZF domain not created for application"

Related

PERSEO_NOTICES_PATH='/notices', PERSEO_RULES_PATH='/rules': create a subscription to Orion from the CEP & how to notify rules & subscriptions between Orion & the CEP

I want to create a subscription from Perseo CEP to Orion CB so that when an attribute changes, Perseo CEP fires a rule.
How to use these 3 directives:
- PERSEO_NOTICES_PATH='/notices',
- PERSEO_RULES_PATH='/rules'
- MAX_AGE
For MAX_AGE, I want to set a value that lasts forever, or for many years.
perseo-core:
  image: fiware/perseo-core
  hostname: perseo-core
  container_name: fiware-perseo-core
  depends_on:
    - mongo-db
    - orion
  networks:
    - smartcity
  ports:
    - "8080:8080"
  environment:
    - PERSEO_FE_URL=http://perseo-fe:9090
    - MAX_AGE=9999
perseo-front:
  image: fiware/perseo
  hostname: perseo-fe
  container_name: fiware-perseo-fe
  networks:
    - smartcity
  ports:
    - "9090:9090"
  depends_on:
    - perseo-core
  environment:
    - PERSEO_ENDPOINT_HOST=perseo-core
    - PERSEO_ENDPOINT_PORT=8080
    - PERSEO_MONGO_HOST=mongo-db
    - PERSEO_MONGO_URL=http://mongo-db:27017
    - PERSEO_MONGO_ENDPOINT=mongo-db:27017
    - PERSEO_ORION_URL=http://orion:1026/
    - PERSEO_LOG_LEVEL=debug
    - PERSEO_CORE_URL=http://perseo-core:8080
    - PERSEO_SMTP_SECURE=true
    - PERSEO_MONGO_USER=root
    - PERSEO_MONGO_PASSWORD=example
    - PERSEO_SMTP_HOST=x
    - PERSEO_SMTP_PORT=25
    - PERSEO_SMTP_AUTH_USER=x
    - PERSEO_SMTP_AUTH_PASS=x
    - PERSEO_NOTICES_PATH='/notices'
    - PERSEO_RULES_PATH='/rules'
You can find basic information about CB subscriptions in the NGSIv2 API walkthrough and the full detail in the NGSIv2 Specification ("Subscriptions" section).
In this case, you have to set as notification endpoint the one corresponding to Perseo. Taking into account the above configuration for PERSEO_ENDPOINT_PORT and PERSEO_NOTICES_PATH it should be something like this:
...
"notification": {
    "http": {
        "url": "http://<perseohost>:8080/notices"
    },
...
EDIT: maybe the port is 9090 instead of 8080. I'm not fully sure (9090 could be the port of the Perseo FE, where /notices is listening, while 8080 is the port that Perseo FE uses to contact Perseo Core).
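Putting the pieces together, a complete NGSIv2 subscription might look like the sketch below (the entity type and attribute name are placeholders for illustration; the port assumes /notices is served by Perseo FE on 9090, per the uncertainty noted above):

```json
{
  "description": "Notify Perseo when temperature changes",
  "subject": {
    "entities": [{ "idPattern": ".*", "type": "Room" }],
    "condition": { "attrs": ["temperature"] }
  },
  "notification": {
    "http": { "url": "http://<perseohost>:9090/notices" },
    "attrs": ["temperature"]
  }
}
```

POSTing this to Orion's /v2/subscriptions endpoint makes Orion deliver notifications to Perseo's notices path whenever the watched attribute changes.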
When creating the rule, I was sending it to http://perseo-coreip:8080/perseo-core/rules, which is not correct;
the correct endpoint is http://perseo-fe-ip:9090/rules. With that it works:
the rule is stored in MongoDB and fired properly.
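For reference, a Perseo rule POSTed to http://perseo-fe-ip:9090/rules is a JSON document with a name, an EPL statement, and an action. The sketch below is illustrative only (the rule name, the EPL pattern and the action parameters are made up; check the Perseo documentation for the exact grammar):

```json
{
  "name": "high_temperature",
  "text": "select *, \"high_temperature\" as ruleName from pattern [every ev=iotEvent(cast(cast(ev.temperature?, String), float) > 30)]",
  "action": {
    "type": "update",
    "parameters": {
      "attributes": [{ "name": "status", "value": "too_hot" }]
    }
  }
}
```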

Docker-Compose Services Not Communicating

Docker noob alert. Hope this isn't a dumb question, but I cannot seem to figure out what is going on. I am trying to create a docker-compose file which creates a MySQL db with a mounted volume, and a Go webserver app that connects to the MySQL db.
Here is my docker-compose file:
services:
  db:
    image: mysql:8.0.2
    environment:
      MYSQL_ROOT_PASSWORD: test
      MYSQL_DATABASE: northernairport
    ports:
      - "3306:3306"
    volumes:
      - /data:/var/lib/mysql
  web:
    depends_on:
      - db
    build: .
    ports:
      - "8080:8080"
My Go application can't seem to connect to the MySQL db though; I thought "depends_on" would ensure this was possible.
Error I get:
panic: dial tcp 127.0.0.1:3306: getsockopt: connection refused
Can anyone tell me what I am doing wrong here? Thanks.
The depends_on only controls the build and startup order for the services.
Your actual issue is more likely that you are using the wrong address from your web application to your database. I see that you have not defined any networks, so you are using the default network created for your application by docker-compose. This will publish each service by name on the default network's DNS.
So, your web application should probably be using db:3306 as the database address, not localhost:3306 or 127.0.0.1:3306 as indicated in the error message.
The ports section maps container ports to host ports in the format HOST:CONTAINER, which means you are trying to reach the wrong machine. Configure the web app to connect to db:3306 instead.
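To make the fix concrete, here are the two DSN forms in the Go MySQL driver's user:password@tcp(host:port)/dbname format, shown as plain strings (credentials taken from the compose file above; a sketch, not the actual application code):

```python
# Inside the compose default network, the service name "db" resolves to the
# MySQL container, whereas 127.0.0.1 resolves to the web container itself,
# which is why the connection is refused.
wrong = "root:test@tcp(127.0.0.1:3306)/northernairport"
right = "root:test@tcp(db:3306)/northernairport"
print(right)
```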

Unable to connect my app to MySql service in Docker composer network

Updated: see below
I'm new to Docker and trying to compose a .NET Core 2.0 web API in a Ubuntu 18 host, using Docker 18.05.0-ce (build f150324) and these services, all in the same network:
SQL Server database;
MySql database (https://github.com/docker-library/docs/tree/master/mysql/);
Mongo database.
My docker compose file provides these services from their respective images, as reported below. In short:
SQL Server from image microsoft/mssql-server-linux, port 1433, sa user password set via environment variable SA_PASSWORD after accepting the EULA via ACCEPT_EULA;
MongoDB from image mongo, port 27017;
MySql from image mysql/mysql-server, port 3306, password for the root user set via the environment variable MYSQL_ROOT_PASSWORD. I essentially used these sources to configure the service: https://github.com/docker-library/docs/tree/master/mysql#mysql_database and https://docs.docker.com/samples/library/mysql/#environment-variables.
Of course, the web API accessing these services uses other credentials in its development environment, but I'm overriding them via environment variables to adjust the system to Docker (in ASPNET Core 2, as you can see from https://github.com/aspnet/MetaPackages/blob/dev/src/Microsoft.AspNetCore/WebHost.cs, the CreateDefaultBuilder method already includes the environment variables as a configuration source):
DATA__DEFAULTCONNECTION__CONNECTIONSTRING: the connection string to Sql Server, using SA with the same password set for the Docker service (see above): "Server=sqlserver\\sqlexpress,1433;Database=lexmin;User Id=SA;Password=P4ss-W0rd!;MultipleActiveResultSets=true". Note that for Windows we use : as the configuration hierarchy separator, but for non-Windows OSes we must use __ (found it out at https://github.com/aspnet/Configuration/issues/469).
SERILOG__CONNECTIONSTRING is the same connection used by the Serilog logger.
LEX__CONNECTIONSTRING is the MongoDB connection string: mongodb://mongo:27017/lex_catalog.
ZAX__CONNECTIONSTRING is the MySql connection string: Server=mysql;Database=zax_master;Uid=root;Pwd=password;SslMode=none.
the other environment variables are for using the MySql and Mongo command-line tools for dumping a database, as I need to invoke them from my API (as you can see from the script, I still have to find out the exact location of these executables in the Ubuntu environment, but this is a detail).
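The `__`-to-`:` separator mapping mentioned above can be mimicked like this (a sketch for illustration only, not the actual framework code):

```python
# ASP.NET Core's environment-variables configuration provider treats "__"
# as the ":" configuration hierarchy separator, because ":" is not a valid
# character in environment variable names on all platforms.
def env_to_config_key(name: str) -> str:
    return name.replace("__", ":")

print(env_to_config_key("DATA__DEFAULTCONNECTION__CONNECTIONSTRING"))
# maps to the Data:DefaultConnection:ConnectionString path in appsettings.json
```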
Here is the corresponding appsettings.json in my web API (shortened), which shows the paths corresponding to the environment variables names:
{
"Data": {
"DefaultConnection": {
"ConnectionString": "..."
}
},
"Serilog": {
"ConnectionString": "...",
"TableName": "Log",
"MinimumLevel": {
"Default": "Debug",
"Override": {
"Microsoft": "Information",
"System": "Warning"
}
}
},
"Lex": {
"ConnectionString": "..."
},
"Zax": {
"ConnectionString": "..."
},
"Environment": {
"MongoDirectory": "...",
"MySqlDirectory": "...",
"MySqlDumpUser": "root",
"MySqlDumpPassword": "..."
}
}
Now, when I run docker-compose up, all the services start OK, MySql included; yet, my app throws an exception when trying to connect to MySql: Application startup exception: MySql.Data.MySqlClient.MySqlException (0x80004005): Host '172.19.0.5' is not allowed to connect to this MySQL server.
Could anyone help? Here is my composer script:
version: '3.4'
services:
  # SQL Server at default port
  lexminmssql:
    image: microsoft/mssql-server-linux
    container_name: sqlserver
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "P4ss-W0rd!"
    ports:
      - 1433:1433
    networks:
      - lexminnetwork
  # MongoDB - at default port
  lexminmongo:
    image: mongo
    container_name: mongo
    ports:
      - 27017:27017
    networks:
      - lexminnetwork
  # MySql at default port
  lexminmysql:
    image: mysql/mysql-server
    container_name: mysql
    environment:
      # the password that will be set for the MySQL root superuser account
      MYSQL_ROOT_PASSWORD: "password"
    ports:
      - 3306:3306
    networks:
      - lexminnetwork
  # Web API
  lexminapi:
    image: naftis/lexminapi
    ports:
      - 58942:58942
    depends_on:
      lexminmssql:
        condition: service_healthy
      lexminmongo:
        condition: service_healthy
      lexminmysql:
        condition: service_healthy
    build:
      context: .
      dockerfile: LexminApi/Dockerfile
    environment:
      # for Windows use : as separator, for non Windows use __
      # (see https://github.com/aspnet/Configuration/issues/469)
      DATA__DEFAULTCONNECTION__CONNECTIONSTRING: "Server=sqlserver\\sqlexpress,1433;Database=lexmin;User Id=SA;Password=P4ss-W0rd!;MultipleActiveResultSets=true"
      SERILOG__CONNECTIONSTRING: "Server=sqlserver\\sqlexpress,1433;Database=lexmin;User Id=SA;Password=P4ss-W0rd!;MultipleActiveResultSets=true"
      LEX__CONNECTIONSTRING: "mongodb://mongo:27017/lex_catalog"
      ZAX__CONNECTIONSTRING: "Server=mysql;Database=zax_master;Uid=root;Pwd=password;SslMode=none"
      # TODO: locate BIN directories in Linux
      ENVIRONMENT__MONGODIRECTORY: ""
      ENVIRONMENT__MYSQLDIRECTORY: ""
      ENVIRONMENT__MYSQLDUMPUSER: "root"
      ENVIRONMENT__MYSQLDUMPPASSWORD: "password"
    networks:
      - lexminnetwork
    volumes:
      - ./zax-users.xml:/etc/lexmin/zax-users.xml
  # Web app
  # TODO
networks:
  lexminnetwork:
    driver: bridge
ADDITION
Thank you, I'm trying to work on these connection issues one at a time, as I found out that the same happens for SQL Server in my composed containers. So I'm extending the question to SQL Server too; I suppose it's better to keep the discussion in the same post, as the issues seem similar.
I started with a smaller set of containers, ruling out MySql at present. My composer is:
version: '3.4'
services:
  # SQL Server
  cadmusmssql:
    image: microsoft/mssql-server-linux
    container_name: cadmussqlserver
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "P4ss-W0rd!"
    ports:
      - 1433:1433
    networks:
      - cadmusnetwork
  # MongoDB
  cadmusmongo:
    image: mongo
    container_name: cadmusmongo
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    ports:
      - 27017:27017
    command: mongod --smallfiles --logpath=/dev/null # --quiet
    networks:
      - cadmusnetwork
  cadmusapi:
    image: ...myprivaterepoimage...
    ports:
      - 60304:60304
    depends_on:
      - cadmusmssql
      - cadmusmongo
    environment:
      ASPNETCORE_ENVIRONMENT: Production
      DATA__DEFAULTCONNECTION__CONNECTIONSTRING: "Server=127.0.0.1\\sqlexpress,1433;Database=cadmusapi;User Id=SA;Password=P4ss-W0rd!;MultipleActiveResultSets=true"
      SERILOG__CONNECTIONSTRING: "Server=127.0.0.1\\sqlexpress,1433;Database=cadmusapi;User Id=SA;Password=P4ss-W0rd!;MultipleActiveResultSets=true"
    networks:
      - cadmusnetwork
networks:
  cadmusnetwork:
    driver: bridge
By googling around, I found that it is required to explicitly set the port number in the connection string. Thus, even if currently I'm overriding environment variables in the docker compose file, I added them in my appsettings.Production.json, too. In my Program.cs Main method, I setup the configuration like this (for Serilog: see http://www.carlrippon.com/?p=1118):
IConfiguration configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .AddJsonFile(
        $"appsettings.{Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Production"}.json",
        optional: true)
    .Build();
So this overrides the appsettings.json file with appsettings.Production.json if ASPNETCORE_ENVIRONMENT is not specified. Anyway, just to make it clearer, in my composer I have added it:
environment:
  - ASPNETCORE_ENVIRONMENT=Production
To check my environment variables, I added code to dump them at app startup. The dump has the expected connection strings:
cadmusapi_1 | ASPNETCORE_PKG_VERSION = 2.0.8
cadmusapi_1 | ASPNETCORE_URLS = http://+:80
cadmusapi_1 | DATA__DEFAULTCONNECTION__CONNECTIONSTRING = Server=127.0.0.1\sqlexpress,1433;Database=cadmusapi;User Id=SA;Password=P4ss-W0rd!;MultipleActiveResultSets=true
cadmusapi_1 | DOTNET_DOWNLOAD_SHA = d8f6035a591b5500a8b81188d834ed4153c4f44f1618e18857c610d0b332d636970fd8a980af7ae3fbff84b9f1da53aa2f45d8d305827ea88992195cd5643027
cadmusapi_1 | DOTNET_DOWNLOAD_URL = https://dotnetcli.blob.core.windows.net/dotnet/Runtime/2.0.7/dotnet-runtime-2.0.7-linux-x64.tar.gz
cadmusapi_1 | DOTNET_RUNNING_IN_CONTAINER = true
cadmusapi_1 | DOTNET_VERSION = 2.0.7
cadmusapi_1 | HOME = /root
cadmusapi_1 | HOSTNAME = 29884ca26699
cadmusapi_1 | PATH = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
cadmusapi_1 | SERILOG__CONNECTIONSTRING = Server=127.0.0.1\sqlexpress,1433;Database=cadmusapi;User Id=SA;Password=P4ss-W0rd!;MultipleActiveResultSets=true
Here is the dump code, in case it is useful for a quick copy-and-paste:
private static void DumpEnvironment()
{
    IDictionary dct = Environment.GetEnvironmentVariables();
    List<string> keys = new List<string>();
    foreach (DictionaryEntry entry in dct)
    {
        keys.Add(entry.Key.ToString());
    }
    foreach (string key in keys.OrderBy(s => s))
        Console.WriteLine($"{key} = {dct[key]}");
}
Yet, I keep getting the connection error from SqlServer like: Application startup exception: System.Data.SqlClient.SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 35 - An internal exception was caught).
As the services start, I tried to connect to them from my (Ubuntu) Docker host with IP 127.0.0.1. I installed SQL Operations Studio, and connected to 127.0.0.1\sqlexpress,1433 with username SA and the password specified in the compose file, and this works fine. So, how does it happen that the same authentication parameters fail when used from my ASP.NET Core app in its container?
This looks to me like a MySQL security error: basically, you need to configure MySQL to allow external connections.
There's a very good thread here that goes through the troubleshooting steps to get this sorted.

MYSQL connection error using service name as user host

I have a couple of containers which need to connect to a MySQL container. For this example I will use my webapi container. Currently this works if the "test" user's host is set to the container's IP (172.18.0.4) in the MySQL user table.
My understanding is that I can use the service name from the docker-compose file, since the containers are on the same network, and thus not rely on an IP. However, when I change the host of my "test" user to the service name of the container, e.g. webapi, a MySQL error is thrown:
SQLSTATE[HY000] [1045] Access denied for user 'test'@'172.18.0.4' (using password: YES)
My docker-compose file is as
version: '2.3'
services:
  webapp:
    image: webapp:0.3
    networks:
      - frontend
    depends_on:
      - database
  webapi:
    image: webapi:0.2
    networks:
      - frontend
    depends_on:
      - database
  database:
    image: database:0.1
    networks:
      - frontend
      - backend
    volumes:
      - mysql-data:/var/lib/mysql
networks:
  frontend:
    external: true
  backend:
    external: true
volumes:
  mysql-data:
    external:
      name: mysql-data
Is there something I am doing wrong, or is my understanding that the service name can be used as the MySQL user host incorrect?
The database image is built from the official MariaDB dockerfile.
I found this a while ago but forgot to reply back. Interestingly, the Dockerfile sets skip-name-resolve, which disables DNS hostname lookups.
I found this a bit odd, as Docker pushes the developer towards using container names as DNS records...
So if you come across this issue, remove the line that echoes it into the .cnf file near the bottom of the Dockerfile.
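For reference, the option the Dockerfile echoes into the server configuration looks like the fragment below (the exact .cnf file name depends on the image); removing or commenting out the line re-enables hostname resolution, so a grant whose host is a service name like webapi can match:

```ini
[mysqld]
# skip-name-resolve    <- remove this line so MySQL resolves client hostnames
```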

AuthZForce Security Level 2: Basic Authorization error "AZF domain not created for application"

We are trying to deploy our security layer (KeyRock, Wilma, AuthZForce) to protect our Orion instance.
We are able to get security level 1 (authentication) working with Keyrock and Wilma, but when we try to insert AuthZForce to check the verb+resource authorization, we get the error message:
AZF domain not created for application
In the PEP Proxy User Guide, under "Level 2: Basic Authorization" section, it is stated that we have to configure the roles and permissions for the user in the application. I have created my user and registered my application following the steps on the Fiware IdM User and Programmers Guide. I also created an additional rule to match exactly the resource that I'm trying to GET to guarantee that there is no path mistake.
I am also able to create domains as stated in the AuthZForce - Installation and Administration Guide but I don't know how to bind the Domain ID with user roles when creating them. I've searched in the IdM GUI and in the documentation but I couldn't find how to do it.
So, how can I insert users/organizations/applications under a specific domain, and then have the security level 2?
Update:
My Wilma's config.js file has this section:
...
config.azf = {
    enabled: true,
    host: 'authzforce',
    port: 8080,
    path: '/authzforce/domains/',
    custom_policy: undefined
};
...
And my docker-compose.yml file is:
pepwilma:
  image: ging/fiware-pep-proxy
  container_name: test_pepwilma
  hostname: pepwilma
  volumes:
    - ./wilma/config.js:/opt/fiware-pep-proxy/config.js
  links:
    - idm
    - authzforce
  ports:
    - "88:80"
idm:
  image: fiware/idm
  container_name: test_idm
  links:
    - authzforce
  ports:
    - "5000:5000"
    - "8000:8000"
authzforce:
  image: fiware/authzforce-ce-server
  container_name: test_authzforce
  hostname: authzforce
  ports:
    - "8080:8080"
Is the error AZF domain not created reported by KeyRock or Wilma?