Domain not found: AZF domain not created for application - fiware

I got this error while trying to configure level 2 authentication using IdM, a PEP Proxy and a PDP.
I am using the latest versions of AuthZForce, IdM and the PEP Proxy, but the error still persists.
config.azf = {
    enabled: true,
    protocol: 'http',
    host: 'localhost',
    port: 8080,
    custom_policy: undefined // use undefined to default policy checks (HTTP verb + path).
};
This is the relevant part of the config.
As I understand it, IdM connected with AuthZForce should create the domains automatically, but for some reason that is not the case.
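One way to double-check is to query the AuthZForce domains API directly. Below is a minimal Node.js sketch; the /authzforce-ce/domains path is AuthZForce CE's default base path, and the host/port are taken from the config above:
const http = require('http');

// List the domains AuthZForce currently knows about; an empty list means
// no domain was ever created for the application.
http.get({ host: 'localhost', port: 8080, path: '/authzforce-ce/domains' }, (res) => {
    let body = '';
    res.on('data', (chunk) => (body += chunk));
    res.on('end', () => {
        console.log('Status:', res.statusCode);
        console.log(body); // XML listing of domain links
    });
}).on('error', (err) => console.error('AuthZForce unreachable:', err.message));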
I have tried different versions and read similar issues on Stack Overflow, but the problem still persists. Any advice, or a pointer to what I am doing wrong, would be really helpful.
Thanks

Related

Quarkus reactive datasource SSL handshake failure

I am facing the same problem described in (Error on Quarkus reactive datasource SSL handshake). That problem seems to be solved, but I didn't manage to make it work. I tried providing the trust-certificate-pem property, but I still get "SSL handshake failed".
My YAML config looks something like this:
quarkus:
  datasource:
    reactive:
      url: postgresql://<host>:5432/<database>
      postgresql:
        ssl-mode: verify_ca
      trust-certificate-pem:
        enabled: true
        certs: /path/client-cert.pem,/path/server-ca.pem
      key-certificate-pem:
        enabled: true
        keys: /path/client-key.pem
        certs: /path/client-cert.pem
Am I missing something? I would really appreciate the help.

Error TypeOrmModule Unable to connect to database with "ETIMEDOUT" or "Handshake inactivity timeout"

I have a NestJS (v8.2.x) server application which I'm attempting to connect to AWS Aurora 3.x (MySQL 8.x protocol) using TypeORM (v0.2.41) and either the mysql (v2.18.1) or mysql2 (v2.3.3) driver. The application is running in a GitHub Codespace.
When following the NestJS TypeORM documentation I'm getting the following errors:
With mysql2 driver I'm getting:
ERROR [TypeOrmModule] Unable to connect to the database. Retrying (1)...
Error: connect ETIMEDOUT
...
With mysql driver I'm getting:
[TypeOrmModule] Error: Handshake inactivity timeout
...
The code creating the connection looks as follows:
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { AppController } from './app.controller';
import { AppService } from './app.service';

const MYSQL_HOST = '....rds.amazonaws.com';
const MYSQL_USERNAME = '...';
const MYSQL_PASSWORD = '...';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'mysql',
      host: MYSQL_HOST,
      port: 3306,
      username: MYSQL_USERNAME,
      password: MYSQL_PASSWORD,
      database: 'kitchen',
      // entities: [__dirname + '/**/*.entity{.ts,.js}'],
      debug: true,
      logging: true,
    }),
  ],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}
Initial Troubleshooting
First, I validated the credentials used by the server application and confirmed that they connect successfully via TablePlus. That ruled out "invalid credentials", so the issue had to be elsewhere.
Second, when creating the AWS Aurora database I had selected Yes for Public Access:
Amazon EC2 instances and devices outside the VPC can connect to your database. Choose one or more VPC security groups that specify which EC2 instances and devices inside the VPC can connect to the database.
Fix
TL;DR: Although I'd selected Yes for Public Access, it seems I still had to relax the "inbound" security rules. Adding another inbound rule with source "0.0.0.0/0" resolved my issue.
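For reference, the same rule can be added programmatically with the AWS SDK for JavaScript v3. This is just a sketch: the security group ID and region are placeholders, and port 3306 assumes the default MySQL/Aurora port.
import { EC2Client, AuthorizeSecurityGroupIngressCommand } from '@aws-sdk/client-ec2';

const client = new EC2Client({ region: 'us-east-1' }); // placeholder region

// Open MySQL's port 3306 to all IPv4 addresses on the given security group.
client.send(
  new AuthorizeSecurityGroupIngressCommand({
    GroupId: 'sg-0123456789abcdef0', // placeholder: the security group attached to the Aurora cluster
    IpPermissions: [
      {
        IpProtocol: 'tcp',
        FromPort: 3306,
        ToPort: 3306,
        IpRanges: [{ CidrIp: '0.0.0.0/0', Description: 'Allow all IPv4 (wide open; restrict later)' }],
      },
    ],
  }),
)
  .then(() => console.log('Inbound rule added'))
  .catch((err) => console.error('Failed to add rule:', err.message));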
Debug
Why? Maybe the default rule with source "76.202.164.21/32" doesn't work because of where the GitHub Codespace is hosted? No idea...
How did I find this?
Initially, I was using the mysql2 package and getting its error (listed above), with no Stack Overflow results. As mysql2 is a "drop-in replacement" for the basic mysql package, I decided to revert to mysql to see if it produced a different error. As listed above, I received a slightly different error, which led me to the Stack Overflow question Error: Handshake inactivity timeout in Node.js MYSQL module, where there are AWS-specific answers:
a) mscheker's "add an inbound rule":
For those deploying on AWS and experiencing this error, you'll need to make a change to the security group of your database/cluster and add an inbound rule where the source is the security group of your instance/s.
b) Berkay Torun's "changing the allowed IP Addresses":
If you are using Amazon's services, I was able to resolve this by changing the allowed IP Addresses in the security settings or by changing the open connections ports.
These are what I followed to resolve the issue: adding an extra inbound rule that allows all IPv4 addresses via source "0.0.0.0/0".
In my case I had to add the entity via forFeature in the module.
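For context, registering an entity with forFeature in a NestJS feature module looks roughly like the sketch below; the Kitchen entity and KitchenModule are made-up names for illustration.
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { Kitchen } from './kitchen.entity'; // hypothetical entity class

@Module({
  // Registers the Kitchen repository so it can be injected into this module's providers.
  imports: [TypeOrmModule.forFeature([Kitchen])],
})
export class KitchenModule {}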

Custom domain not pointing to Heroku project

I have deployed my Node.js project on Heroku, but I am not able to point my domain (purchased from ionos.ca) to the Heroku DNS target. I have added two domains in the Heroku dashboard:
*.mysite.com, DNS Target: aqueous-jay-p8wmra8eyzlv3gzckdhj99je.herokudns.com
www.mysite.com, DNS Target: experimental-turnip-ha25x6iwdwmb4xzxtsdrhj3k.herokudns.com
Then in my ionos.ca domain portal, I changed the CNAME to
aqueous-jay-p8wmra8eyzlv3gzckdhj99je.herokudns.com
But whenever I visit www.mysite.com I get an error saying:
This site can’t provide a secure connection. www.mysite.com sent an invalid response.
Visiting mysite.com gives me this error:
This site can’t be reached. mysite.com’s server IP address could not be found.
Any idea how I could fix this? I have been trying to make it work for the last hour :(
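One quick check (a sketch; the hostnames are the ones above) is to see what each name actually resolves to, since each domain added in the Heroku dashboard gets its own DNS target and most registrars won't accept a plain CNAME on the bare root domain:
const dns = require('dns').promises;

// Print the CNAME chain (if any) for both hostnames.
for (const name of ['www.mysite.com', 'mysite.com']) {
  dns.resolveCname(name)
    .then((records) => console.log(name, '->', records))
    .catch((err) => console.log(name, '->', err.code)); // e.g. ENOTFOUND / ENODATA
}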
Something is wrong with your SSL/TLS setup; Fiddler4/Wireshark is showing Internal Error (80). I found some references that may help here: https://stackoverflow.com/questions/43436412/openssl-connection-alert-internal-error. If you are using NGINX, post your config and I can help with that.
Frame 138: 61 bytes on wire (488 bits), 61 bytes captured (488 bits) on interface 0
Ethernet II, Src: Fortinet_d4:fd:97 (70:4c:a5:d4:fd:97), Dst: Dell_b3:a3:f6 (b8:85:84:b3:a3:f6)
Internet Protocol Version 4, Src: 52.73.16.193, Dst: 192.168.1.40
Transmission Control Protocol, Src Port: 443, Dst Port: 63037, Seq: 1, Ack: 221, Len: 7
Transport Layer Security
TLSv1.2 Record Layer: Alert (Level: Fatal, Description: Internal Error)
Content Type: Alert (21)
Version: TLS 1.2 (0x0303)
Length: 2
Alert Message
Level: Fatal (2)
Description: Internal Error (80)
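To reproduce the handshake outside the browser, a minimal Node.js check in the same spirit (hostname taken from the question) could look like this:
const tls = require('tls');

// Attempt a TLS handshake with SNI and report the negotiated protocol or the failure.
const socket = tls.connect({ host: 'www.mysite.com', port: 443, servername: 'www.mysite.com' }, () => {
  console.log('Handshake OK, protocol:', socket.getProtocol());
  socket.end();
});
socket.on('error', (err) => console.error('TLS error:', err.message));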

Nodemailer works locally, but does not work without displayunlockcaptcha on Netlify

I have an email verification feature using nodemailer in my Node server.
It works on localhost, but it does not work on Netlify once it's deployed.
Here is my code.
const transporter = nodemailer.createTransport({
  host: "smtp.gmail.com",
  port: 465,
  secure: true,
  auth: {
    user: "mygmail@gmail.com",
    pass: "mypassword",
  },
});
On the live server, it produces this error:
Error: Invalid login: 534-5.7.14 534-5.7.14 Please log in via your web browser and then try again. 534-5.7.14 Learn more at 534 5.7.14 https://support.google.com/mail/answer/78754 195sm513587qkd.6 - gsmtp
I enabled "Less secure apps" in my Google account and allowed https://accounts.google.com/b/0/displayunlockcaptcha as well.
It worked for a while, but after I cleared my browser history it stopped working again.
So I allowed displayunlockcaptcha again, and it worked.
That means I would have to re-allow displayunlockcaptcha every time.
Is there any way to keep it allowed, or is there another way?
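One alternative that avoids the captcha unlock entirely is to stop using the plain account password and authenticate with OAuth2 instead. A sketch, assuming OAuth2 credentials created in Google Cloud; the environment variable names are placeholders:
const nodemailer = require("nodemailer");

// Gmail SMTP with OAuth2 instead of the account password.
const transporter = nodemailer.createTransport({
  host: "smtp.gmail.com",
  port: 465,
  secure: true,
  auth: {
    type: "OAuth2",
    user: "mygmail@gmail.com",
    clientId: process.env.GMAIL_CLIENT_ID,         // placeholder
    clientSecret: process.env.GMAIL_CLIENT_SECRET, // placeholder
    refreshToken: process.env.GMAIL_REFRESH_TOKEN, // placeholder
  },
});
Alternatively, with 2-step verification enabled on the account, a Gmail app password can replace "mypassword" in the original auth block.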

Fiware AuthZForce error: "AZF domain not created for application"

I'm trying to protect Orion Context Broker using KeyRock IdM, Wilma PEP Proxy and AuthZForce PDP over Docker. For now, level 1 security works well and I can deny access to non-logged-in users, but I get this error on Wilma when trying to add level 2.
AZF domain not created for application <applicationID>
Here is my azf configuration in Wilma's config.js file:
config.azf = {
    enabled: true,
    protocol: 'http',
    host: 'azfcontainer',
    port: 8080,
    custom_policy: undefined
};
And this is how I set the access control configuration on KeyRock:
# ACCESS CONTROL GE
ACCESS_CONTROL_URL = 'http://azfcontainer:8080'
ACCESS_CONTROL_MAGIC_KEY = None
I have created the custom policies on KeyRock, but the AuthZForce logs don't show any request from KeyRock or Wilma, so no domain is created on the PDP. I have checked that all containers can see and reach each other and that all ports are up. I may be missing some configuration.
These are the versions I'm using:
keyrock=5.4.1
wilma=5.4
authzforce=6.0.0/5.4.1
This question is the same as "AZF domain not created for application" AuthZforce, but my problem persists even with the AuthZForce GE configuration shown there.
I found the cause of this problem. It occurs when AuthZForce is not behind a PEP Proxy, and therefore the variable ACCESS_CONTROL_MAGIC_KEY is left unmodified (None by default).
It seems horizon reads both the ACCESS_CONTROL_URL and ACCESS_CONTROL_MAGIC_KEY parameters in openstack_dashboard/local/local_settings.py when it needs to connect to AuthZForce. Theoretically, the second parameter is optional (it introduces an 'X-Auth-Token' header for the PEP Proxy), but if horizon detects that it is None (the default value in local_settings.py) or an empty string, the log shows a warning and the function "policyset_update" in openstack_dashboard/fiware_api/access_control_ge.py returns immediately. So the communication with AuthZForce never takes place.
The easiest way to solve the problem is to set some text as the magic key in openstack_dashboard/local/local_settings.py:
# ACCESS CONTROL GE
ACCESS_CONTROL_URL = 'http://authzforce_url:port'
ACCESS_CONTROL_MAGIC_KEY = '1234567890' # DO NOT LEAVE None OR EMPTY
Thus, an 'X-Auth-Token' header will be generated, but it shouldn't affect the communication when AuthZForce isn't behind a PEP Proxy (the header is simply ignored).
Note: remember to delete the cached bytecode file "openstack_dashboard/local/local_settings.pyc" when making changes, to ensure the new config is picked up after restarting the horizon service.
PS: I sent a pull request to https://github.com/ging/horizon with a simple modification that fixes the problem.