Question 1 :
1.1. Who is sitting behind the "openshift_master_cluster_public_hostname" hostname? Is it the web console (the web console service? or the web console deployment?) or something else?
1.2. When doing oc get service -n openshift-web-console I can see that the web console is running on 443. Isn't it supposed to work on port 8443? The same goes for the API server: shouldn't it be working on port 8443?
1.3. Can you explain to me the flow of a request to https://openshift_master_cluster_public_hostname:8443?
1.4. in the documentation is
Question 2:
Why do I get different responses from curl and wget?
When I run curl https://openshift_master_cluster_public_hostname:8443 , I get:
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1beta1",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
...
"/swagger.json",
"/swaggerapi",
"/version",
"/version/openshift"
]
}
When I run wget https://openshift_master_cluster_public_hostname:8443 , I get an index.html page.
Is the web console answering this request or the API server?
Question 3 :
How can I expose the web console on port 443 rather than 8443? I found several solutions:
using the variables "openshift_master_console_port,openshift_master_api_port", but I found out that these ports are 'internal' ports and not designed to be the public ports, so changing these ports could crash your OpenShift setup
using an external service (described here)
I'm trying to set up port forwarding on an external haproxy. Is that doable?
Answer to Q1:
1.1. Quoting from the documentation, Configuring Your Inventory File:
This variable overrides the public host name for the cluster,
which defaults to the host name of the master. If you use an
external load balancer, specify the address of the external load balancer.
For example:
> openshift_master_cluster_public_hostname=openshift-ansible.public.example.com
This means that this variable is the public-facing interface to the OpenShift web console.
1.2. A Service is a virtual object which connects the service name to the pods and is used to connect the Route object with the Service object. This is explained in the documentation under Services. You can use almost any port for a Service because it's virtual and nothing actually binds to it.
1.3. The answer depends on your setup. I'll explain it for an HA setup with a TCP load balancer in front of the masters.
                        /-> Master API 1
client -> loadbalancer ---> Master API 2
                        \-> Master API 3
The client makes a request to https://openshift_master_cluster_public_hostname:8443, the load balancer forwards it to Master API 1, 2, or 3, and the client gets the answer from that master API server.
The API server redirects to the console if the request comes from a browser (https://github.com/openshift/origin/blob/release-3.11/pkg/cmd/openshift-kube-apiserver/openshiftkubeapiserver/patch_handlerchain.go#L60-L61).
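To observe that redirect yourself, here is a minimal sketch, assuming Node 18+ (global fetch), a lab setup with self-signed certificates, and that the browser detection keys off a Mozilla-style User-Agent header (per the linked handler code); the hostname is the placeholder from the question:

// Sketch only: probe how the master API treats API clients vs. browsers.
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0"; // self-signed lab certs only

const MASTER = "https://openshift_master_cluster_public_hostname:8443/";

async function probe(userAgent: string): Promise<void> {
  const res = await fetch(MASTER, {
    headers: { "User-Agent": userAgent },
    redirect: "manual", // keep any 302 visible instead of following it
  });
  console.log(userAgent, "->", res.status, res.headers.get("location") ?? "(no redirect)");
}

// A browser-like User-Agent should be redirected to the web console, while
// an API-client User-Agent should get the JSON path listing shown in Q2.
probe("Mozilla/5.0").then(() => probe("curl/7.61.0"));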
Answer to Q2:
curl and wget behave differently because they are different tools, but the HTTPS request is the same: curl writes the response body to stdout, while wget saves it to a file (index.html by default).
To get curl's behavior with wget:
wget --output-document=- https://openshift_master_cluster_public_hostname:8443
To get wget's behavior with curl:
curl -o index.html https://openshift_master_cluster_public_hostname:8443
Why - works there is described in Usage of dash (-) in place of a filename.
Answer to Q3:
You can use the OpenShift router, which you already use for the apps, to make the web console available on 443. The following post is a little bit outdated, but the concept is the same for the current 3.x versions: Make OpenShift console available on port 443 (https) [UPDATE]
I'm using PouchDB 7.0.0 in an Ionic project (Ionic 4.0.5).
Within a provider, I define both a local and a remote database:
@Injectable()
export class DatabaseProvider {
  db: PouchDB.Database;
  remote: PouchDB.Database;

  constructor() {
    this.db = new PouchDB("mydb");
    this.remote = new PouchDB("http://<my_server_running_couchdb>/<remote_db_name>");
  }
}
The local database lives in the Chrome browser as an IndexedDB instance. However, the problem also occurs in Firefox, so it does not look like the browser is to blame.
The remote database is initially empty and runs on CouchDB 2.1.2. It has already been created on my server with no admin or member set up, so it should be public and allow non-authenticated requests. By the way, CORS is enabled as well.
In the same provider I also define a method that triggers a replication from the local db to the remote node:
replicateLocalDBToRemote() {
console.log("Replicating database...");
this.db.replicate.to(this.remote).then(() => {
console.log("Celebrate");
}).catch(error => {
console.error(error)
})
}
And here is what the call to replicateLocalDBToRemote throws at me:
CustomPouchError {__zone_symbol__currentTask: e, result: {…}}
result:
doc_write_failures: 0
docs_read: 0
docs_written: 0
end_time: "2018-11-21T16:23:36.974Z"
errors: []
last_seq: 0
ok: false
start_time: "2018-11-21T16:23:36.874Z"
status: "aborting"
and I am afraid I can't call this a self-explanatory message.
Any guesses as to the root cause of the issue?
EDIT: After crawling through the PouchDB repo on GitHub, I found this entry, which might refer to the same problem.
I fixed the problem by allowing traffic through port 5984 on my remote CouchDB server.
The thing is, sending requests to port 80 (i.e. GET http://<my_server>.com/mydb) does send back some data, so I never bothered to try port 5984 in the first place because I thought the API was also implemented on port 80...
So at least my issue had nothing to do with PouchDB, but I wish the error message were a bit more specific.
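For completeness, here is a minimal sketch of the fix under my assumptions (CouchDB listening on its default port 5984; the placeholder names are the ones from the question):

// Sketch: point the remote database explicitly at CouchDB's default port
// 5984 instead of relying on whatever happens to answer on port 80.
import PouchDB from "pouchdb";

const db = new PouchDB("mydb");
const remote = new PouchDB("http://<my_server_running_couchdb>:5984/<remote_db_name>");

// Same replication call as in the question, with the error surfaced.
db.replicate.to(remote)
  .then(() => console.log("Celebrate"))
  .catch(error => console.error("Replication failed:", error));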
I am using Postman to test an API I have. All is good when the request does not contain a sub-domain; however, when I add a sub-domain to the URL I get this response:
Could not get any response
There was an error connecting to http://subdomain.localhost:port/api/
Why this might have happened:
The server couldn't send a response: Ensure that the backend is working properly
Self-signed SSL certificates are being blocked: Fix this by turning off 'SSL certificate verification' in Settings > General
Proxy configured incorrectly: Ensure that proxy is configured correctly in Settings > Proxy
Request timeout: Change request timeout in Settings > General
If I copy the same URL from Postman and paste it into the browser, I get a proper response. Is there some kind of configuration I should do to make Postman work with sub-domains?
First, go to Settings in Postman:
Turn off SSL certificate verification in the General tab.
Turn off Global Proxy Configuration and Use System Proxy in the Proxy tab.
Set Request Timeout to 0 (zero).
Configure Apache:
If the above changes resulted in a 404 response, then continue reading ;-)
Users that host their site locally (for example with XAMPP or WAMP) may be able to visit their virtual sites using an https:// prefixed address, but it's a lie, and to really enable SSL (for each virtual site), configure Apache like this:
Open httpd-vhosts.conf file (from Apache's conf/extras directory), in your preferred text editor.
Change the virtual site's settings into something like:
<VirtualHost *:80 *:443>
ServerName my-site.local
ServerAlias *.my-site.local
DocumentRoot "C:\xampp\htdocs\my-project\public"
SSLEngine on
SSLCertificateFile "path/to/my-generated.cert"
SSLCertificateKeyFile "path/to/my-generated.key"
SetEnv APPLICATION_ENV "development"
<Directory "C:\xampp\htdocs\my-project\public">
Options Indexes FollowSymLinks
AllowOverride All
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
But of course, generate a dummy SSL certificate, and change all file paths, like "path/to/my-generated.cert", into real file addresses.
Finally, test by visiting the local site in the browser, but using an http:// (without the S) prefixed address; Apache should now give an error like:
Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
I had the same issue. It was caused by a newline at the end of the "Authorization" header's value, which I had set manually by copy-pasting the bearer token (the token accidentally contained the newline at its end).
If you get a "Could not get any response" message from Postman native apps while sending your request, open Postman Console (View > Show Postman Console), resend the request and check for any error logs in the console.
Thanks to numaanashraf.
Hi, this issue is resolved for me by setting:
Settings -> General -> Request timeout in ms = 0
If all the above methods don't work, check your environment variables, and make sure that the following variables are not set. If they are set and not needed by any other application, remove them.
HTTP_PROXY
HTTPS_PROXY
Reference link
For me it was the http://localhost instead of https://localhost.
When getting this error, you need to do the following.
Step 1:
In Postman, click the wrench icon, go to settings, then go to the Proxy tab.
Step 2:
Create a custom proxy. This article explains how to create a custom proxy.
After you create the custom proxy, make sure you turn the Proxy toggle button off. I put 61095 in for the proxy server and it worked for me.
Step 3 :
Success
I came up with this solution:
In Postman, go to Settings --> Proxy.
Turn off Global Proxy Configuration and turn on Use System Proxy.
Then go to the Windows hosts file 'C:\Windows\System32\drivers\etc\hosts', open it in administrator mode, and add the sub-domain to the hosts file.
For me what worked was to add 127.0.0.1 subdomain.localhost to my hosts file. On OSX that was /etc/hosts. Not sure why that was necessary, as I could reach the subdomain from Chrome.
In Postman, go to Settings --> Proxy
and turn off Global Proxy Configuration.
For me, it was that the route I was calling in my node server wasn't returning anything. Adding
return res.status(200).json({
message: 'success!',
response: 'success!'
});
to the route I was calling resolved the issue.
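For context, a minimal sketch of such a handler, assuming Express and a made-up /api route:

import express from "express";

const app = express();

app.get("/api", (req, res) => {
  // Without res.json()/res.send(), the request hangs until it times out,
  // which Postman reports as "Could not get any response".
  res.status(200).json({
    message: "success!",
    response: "success!",
  });
});

app.listen(3000, () => console.log("listening on port 3000"));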
You mentioned you are using a CER certificate.
According to the Postman page on certificates:
Choose your client certificate file in the CRT file field. Currently, we only support the CRT format. Support for other formats (like PFX) will come soon.
The name of the extension (CER, CRT) doesn't make the certificate that type of certificate, but these are the expected extension names.
CER is an X.509 certificate in binary form, DER encoded.
CRT is a binary X.509 certificate, encapsulated in text (base-64) encoding.
You can use OpenSSL to convert a CER file into a CRT file. I have not had good luck with it, but it looks like this:
openssl x509 -inform PEM -in certificate.cer -out certificate.crt
or
openssl x509 -inform DER -in certificate.cer -out certificate.crt
Postman for Linux Version 6.7.1 - Ubuntu 18.04 - linux 4.15.0-43-generic / x64
I had the same problem, and by chance I replaced http://localhost with http://127.0.0.1 and everything worked.
My /etc/hosts had the proper entries for localhost, and https://localhost requests always worked as expected.
I have no clue why replacing localhost with 127.0.0.1 for http solved the issue.
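One possible (unverified) explanation is IPv4/IPv6 resolution: if localhost resolves to ::1 first but the server only listens on 127.0.0.1, a client that prefers IPv6 will fail while http://127.0.0.1 works. A quick check with Node, as a sketch:

// Sketch: see what "localhost" resolves to on this machine.
import { lookup } from "node:dns/promises";

lookup("localhost", { all: true }).then(addresses => {
  // e.g. [ { address: '::1', family: 6 }, { address: '127.0.0.1', family: 4 } ]
  console.log(addresses);
});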
None of these solutions worked for me. Postman was not sending any request to the server because Postman could not resolve the host. So, if you modify your /etc/hosts to
127.0.0.1 localhost
127.0.0.1 subdomain.localhost
It works for me.
For me the issue was that the Content-Length header was too big. I placed the content of the body in Notepad++, counted the characters, put that figure in Postman, and then it worked.
I know it does not directly answer why the OP's sub-domain was not working, but it might help someone out.
In my case it was invisible spaces that Postman didn't recognize; the string of text rendered as if it had no spaces in Postman.
I disabled SSL certificate validation and the System Proxy, and even tried the Postman Chrome extension (which is about to be deprecated). But when I downloaded and tried Insomnia, it showed red dots in the places where those spaces were; they must have gotten in during copy/paste.
For anyone who experienced this issue with a real domain instead of localhost and couldn't solve it using ANY OF THE ABOVE solutions:
Try changing your network DNS (WiFi or LAN) to some other DNS. For me, I used Google DNS (8.8.8.8, 8.8.4.4) and it worked!
The solution is very simple if you are using an ASP.NET Core 2 application. Inside the ConfigureServices method in the Startup.cs file, add this line:
services.AddMvc()
.SetCompatibilityVersion(CompatibilityVersion.Version_2_1)
.AddJsonOptions(x => x.SerializerSettings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore);
You just need to turn SSL off to send your request. The proxy and other settings come with various errors.
My issue was putting the wrong parameter in the header.
The expected parameter was
Authorization: Token <string>
and I was trying
Authorization Token: <string>
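As a quick sketch (Node 18+ fetch; the URL and token value are placeholders), the correct form keeps "Authorization" as the header name and "Token" inside the value:

// The header *name* is "Authorization"; the scheme "Token" is part of the value.
fetch("https://api.example.com/resource", {
  headers: { Authorization: "Token <string>" }, // not "Authorization Token: <string>"
}).then(res => console.log(res.status));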
After all the above methods, like turning OFF SSL certificate verification, turning ON only Use System Proxy, and removing the HTTP_PROXY and HTTPS_PROXY system environment variables, it worked.
Note: Had to restart the Postman app, since the environment variables were changed.
Unchecking proxy and SSL certificate verification didn't work for me.
Unsetting the PROXY environment variables did the trick:
export http_proxy=
export ftp_proxy=
export https_proxy=
Then change to the directory where Postman is installed and run:
./Postman
In my case, MVC wasn't able to serialize the results (I accidentally used a model instead of a DTO). I debugged down to passing a simple string, which worked. Once I fixed the serialization, it all came up.
In my case the (corporate) proxy was using a self-signed SSL certificate, which Postman disliked. I discovered it by activating
View->Show Postman console
and retrying the request. The console then showed the certificate error. In
Settings->General
I disabled
SSL certificate verification.
The solution for me, as I'm using the deprecated Postman extension for Chrome, was the following:
Call some GET request using the Chrome browser itself.
Wait for the error page "Your connection is not private" to appear.
Click on ADVANCED and then on the proceed to [url] (unsafe) link.
After this, requests through the extension itself should work.
In my case it was a misconfigured subnet. Only one of the 2 subnets in the ELB worked.
I figured this out by doing an nslookup and trying to curl the returned IPs directly. Only one worked.
Postman just kept using the misconfigured one.
I had the same issue.
It turned out my timeout was set too low: I had changed it to 30 ms thinking it was 30 seconds. I set it back to 0 and it started working again.
I got the same "Could not get any response" issue because of a wrong parameter in the header. I fixed it by removing the HOST parameter from the header.
PS: Unfortunately, I was pushed to install other software to get this information. It would be great to get this error message from Postman instead of general nonsense.
In my case, I forgot to set the value of the variable in the "CURRENT VALUE" field.
I just experienced this error. In my case, the path was TOO LONG. So a URL like this gave me the error in Postman (fake example):
http://127.0.0.1:5000/api/batch/upload_import_deactivate_from_ready_folder
whereas
http://127.0.0.1:5000/api/batch/upld_impt_deac_ready_folder
worked fine.
Hope it helps someone who by accident read this far...
I'm trying to protect Orion Context Broker using the KeyRock IdM, the Wilma PEP Proxy, and the AuthZForce PDP over Docker. For now, level 1 security works well and I can deny access to non-logged-in users, but I get this error in Wilma when trying to add level 2:
AZF domain not created for application <applicationID>
Here is my azf configuration in Wilma's config.js file:
config.azf = {
enabled: true,
protocol: 'http',
host: 'azfcontainer',
port: 8080,
custom_policy: undefined
};
And this is how I set the access control configuration on KeyRock:
# ACCESS CONTROL GE
ACCESS_CONTROL_URL = 'http://azfcontainer:8080'
ACCESS_CONTROL_MAGIC_KEY = None
I have created the custom policies on KeyRock, but the AuthZForce logs don't show any requests from KeyRock or Wilma, so no domain is created on the PDP. I have checked that all containers can see and reach each other and that all ports are up. I may be missing some configuration.
These are the versions I'm using:
keyrock=5.4.1
wilma=5.4
authzforce=6.0.0/5.4.1
This question is the same as "AZF domain not created for application" AuthZforce, but my problem persists even with the AuthZForce GE configuration shown there.
I found the cause of this problem, which is present when AuthZForce is not behind a PEP Proxy and therefore the variable ACCESS_CONTROL_MAGIC_KEY is not modified (None by default).
It seems Horizon reads both the ACCESS_CONTROL_URL and ACCESS_CONTROL_MAGIC_KEY parameters in openstack_dashboard/local/local_settings.py when it needs to connect to AuthZForce. Theoretically, the second parameter is optional (it introduces an 'X-Auth-Token' header for the PEP Proxy), but if Horizon detects it is None (the default value in local_settings.py) or an empty string, the log shows a warning and it returns immediately from the function "policyset_update" in openstack_dashboard/fiware_api/access_control_ge.py. So the communication with AuthZForce never takes place.
The easiest way to solve the problem is to write some text as the magic key in openstack_dashboard/local/local_settings.py:
# ACCESS CONTROL GE
ACCESS_CONTROL_URL = 'http://authzforce_url:port'
ACCESS_CONTROL_MAGIC_KEY = '1234567890' # DO NOT LEAVE None OR EMPTY
Thus, an 'X-Auth-Token' header will be generated, but it shouldn't affect the communication when AuthZForce isn't behind a PEP Proxy (the header is simply ignored).
Notice: Remember to delete the cached bytecode file "openstack_dashboard/local/local_settings.pyc" when making changes, to ensure the new config is picked up after restarting the Horizon service.
PS: I sent a pull request to https://github.com/ging/horizon with a simple modification that fixes the problem.