Is it possible to open a port for Hazelcast on OpenShift? No matter what port I try, I get the same exception:
SocketException: Permission denied
I am not trying to open the port to the world. I just want to open a port so the gears can use Hazelcast. It seems like this should be possible.
You probably have to use an HTTP tunnel to connect Hazelcast. Not a nice solution, but I prototyped it some time ago: https://github.com/noctarius/https-tunnel-openshift-hazelcast
Anyhow, gears means OpenShift V2, doesn't it? I never tried it with V2, but if you get the chance, there's support for V3 (and V3.1) - http://blog.hazelcast.com/openshift/
What cartridge type do you use?
You can bind to any port from 15000 to 35530 internally, but other gears won't be able to access it.
From my experience - I had to open the public proxy port for other members of the cluster to join.
For example, Vert.x cartridge uses Hazelcast for clustering and has some additional public proxy ports open (see https://github.com/vert-x/openshift-cartridge/blob/master/metadata/manifest.yml).
Endpoints:
  - Private-IP-Name:   IP
    Private-Port-Name: PORT
    Private-Port:      8080
    Public-Port-Name:  PROXY_PORT
    Mappings:
      - Frontend: ""
        Backend: ""
        Options: { "websocket": 1 }
  - Private-IP-Name:   IP
    Private-Port-Name: HAZELCAST_PORT
    Private-Port:      5701
    Public-Port-Name:  HAZELCAST_PROXY_PORT
  - Private-IP-Name:   IP
    Private-Port-Name: CLUSTER_PORT
    Private-Port:      9123
    Public-Port-Name:  CLUSTER_PROXY_PORT
(see https://access.redhat.com/documentation/en-US/OpenShift_Online/2.0/html/Cartridge_Specification_Guide/chap-Exposing_Services.html).
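If it helps, OpenShift V2 publishes each endpoint from the manifest as environment variables inside the gear, so the public proxy address that other cluster members need can be read from there. A rough sketch, assuming the vert.x cartridge's short name VERTX - check your own cartridge's manifest for the exact variable names:
# address other gears should use to join this member (public proxy)
echo "join address: ${OPENSHIFT_GEAR_DNS}:${OPENSHIFT_VERTX_HAZELCAST_PROXY_PORT}"
# address Hazelcast itself binds to inside this gear (private endpoint)
echo "bind address: ${OPENSHIFT_VERTX_IP}:${OPENSHIFT_VERTX_HAZELCAST_PORT}"
You would then feed the collected public proxy addresses into Hazelcast's TCP/IP join configuration instead of relying on multicast.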
On OpenShift, you should only bind websockets to either port 8000 or 8443.
See:
https://developers.openshift.com/en/managing-port-binding-routing.html
https://blog.openshift.com/paas-websockets/
Question 1:
1.1. Who is sitting behind the "openshift_master_cluster_public_hostname" hostname? Is it the web console (the web console service or deployment) or something else?
1.2. When doing oc get service -n openshift-web-console I can see that the web console is running on 443. Isn't it supposed to work on port 8443? Same thing for the API server, shouldn't it be working on port 8443?
1.3. Can you explain to me the flow of a request to https://openshift_master_cluster_public_hostname:8443?
1.4. in the documentation is
Question 2:
Why do I get different responses from curl and wget?
When I run curl https://openshift_master_cluster_public_hostname:8443, I get:
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1beta1",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
...
"/swagger.json",
"/swaggerapi",
"/version",
"/version/openshift"
]
}
When I run wget https://openshift_master_cluster_public_hostname:8443, I get an index.html page.
Is the web console answering this request or the
Question 3:
How can I expose the web console on port 443 rather than 8443? I found several solutions:
using the variables openshift_master_console_port and openshift_master_api_port, but I found out that these ports are 'internal' ports and not designed to be the public ports, so changing them could break your OpenShift setup
using an external service (described here)
I'm leaning towards setting up port forwarding on an external haproxy instead; is that doable?
Answer to Q1:
1.1. Quote from the documentation Configuring Your Inventory File:
This variable overrides the public host name for the cluster,
which defaults to the host name of the master. If you use an
external load balancer, specify the address of the external load balancer.
For example:
> openshift_master_cluster_public_hostname=openshift-ansible.public.example.com
This means that this variable is the public-facing interface to the OpenShift web console.
1.2. A Service is a virtual object which connects the Service name to the pods and is used to connect the Route object with the Service object. This is explained in the documentation under Services. You can use almost any port for a Service because it's virtual and nothing binds to that port.
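For example, a minimal Service definition (hypothetical names) can use port 443 even though the pods listen on 8443, because only the targetPort has to match what the pods actually bind to:
apiVersion: v1
kind: Service
metadata:
  name: example-console   # hypothetical name
spec:
  selector:
    app: example-console
  ports:
  - port: 443        # virtual Service port, nothing binds to it on the node
    targetPort: 8443 # the port the pods actually listen on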
1.3. The answer depends on your setup. I'll explain it for an HA setup with a TCP load balancer in front of the masters.
                          /-> Master API 1
client -> load balancer ----> Master API 2
                          \-> Master API 3
The client makes a request to https://openshift_master_cluster_public_hostname:8443, the load balancer forwards it to Master API 1, 2 or 3, and the client gets the answer from that Master API server.
The API server redirects to the console if the request comes from a browser (https://github.com/openshift/origin/blob/release-3.11/pkg/cmd/openshift-kube-apiserver/openshiftkubeapiserver/patch_handlerchain.go#L60-L61).
Answer to Q2:
curl and wget behave differently because they are different tools, but the HTTPS request is the same.
curl behavior with wget:
wget --output-document=- https://openshift_master_cluster_public_hostname:8443
wget behavior with curl:
curl -o index.html https://openshift_master_cluster_public_hostname:8443
Why - works there is described in Usage of dash (-) in place of a filename.
Answer to Q3:
You can use the OpenShift router, which you already use for the apps, to make the web console available on 443. It's a little bit outdated, but the concept is the same for the current 3.x versions: Make OpenShift console available on port 443 (https) [UPDATE]
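As for the external haproxy idea from the question: a plain TCP passthrough in front of the masters is doable. A minimal sketch (hostnames and IPs are placeholders, not from your setup):
frontend openshift-api-console
    bind *:443
    mode tcp
    option tcplog
    default_backend openshift-masters

backend openshift-masters
    mode tcp
    balance source
    server master1 192.0.2.11:8443 check
    server master2 192.0.2.12:8443 check
    server master3 192.0.2.13:8443 check
Keep in mind the console redirects to its configured public URL, so just forwarding 443 to 8443 may still send browsers back to :8443 unless the public URLs in the master configuration are adjusted accordingly.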
1. What I've tried
I want to make an OCP cluster (actually a single-node, all-in-one cluster) like in this blog post:
link: openshift.com/blog/revamped-openshift-all-in-one-aio-for-labs-and-fun
and I also referred to the official document: Installing bare metal
So, what I have tried is this:
(I used VirtualBox to make four VMs)
- 1 bastion
- 1 dns
- 1 master
- 1 bootstrap
These VMs are in the same network.
First, I made ignition files to boot the master and bootstrap nodes.
The install-config.yaml that I used:
apiVersion: v1
baseDomain: hololy-local.com
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 1
metadata:
  name: test
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'
I only changed baseDomain, the master's replica count, pullSecret and sshKey.
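(For reference, the ignition files in this flow are normally generated from that install-config.yaml with the installer; something like the following, where the directory name is just an example:)
mkdir ocp-install
cp install-config.yaml ocp-install/
openshift-install create ignition-configs --dir=ocp-install
# consumes install-config.yaml and produces bootstrap.ign, master.ign, worker.ign plus the auth/ directory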
After making the ignition files, I started to boot the bootstrap node and the master node with the ISO file.
The bootstrap node was installed successfully, but the problem happens on the master node.
2. Details
Before starting the master node installation, I had to set up DNS, because unlike the bootstrap installation, the master node requests domain info during installation.
IP addresses:
dns : 192.168.56.114
master : 192.168.56.150
The DNS zone is like this:
And I started to set up the master node using these parameters:
coreos.inst.install_dev=sda
coreos.inst.image_url=http://192.168.56.114/rhcos438.x86_64.raw.gz
coreos.inst.ignition_url=http://192.168.56.114/master.ign
ip=192.168.56.150::192.168.56.254:255.255.255.0:core0.hololy-local.com:enp0s3:none nameserver=192.168.56.114
Installation finished successfully, but when it boots without the boot disk (.iso), an error comes out.
It seems to be trying to find the master configuration file at api-int.aio.hololy-local.com:22623, and it connects to the IP address that I wrote in the zone file.
But strangely, the connection is refused continuously.
Since I set a static IP during the RHCOS installation, a ping test to 192.168.56.150 works successfully.
I think port 22623 was blocked. But how can I open the port before the OS boots?...
I don't know how to solve it.
Thanks.
I solved it.
The difference between the installation of 3.11 and 4.x is whether an LB is necessary.
In 4.x an LB is necessary, so you should set one up.
In my situation, I set up the LB with nginx, and the sample is like this:
stream {
    upstream ocp_k8s_api {
        #round-robin;
        server 192.168.56.201:6443;  #bootstrap
        server 192.168.56.202:6443;  #master1
        server 192.168.56.203:6443;  #master2
        server 192.168.56.204:6443;  #master3
    }
    server {
        listen 6443;
        proxy_pass ocp_k8s_api;
    }
    upstream ocp_m_config {
        #round-robin;
        server 192.168.56.201:22623;  #bootstrap
        server 192.168.56.202:22623;  #master1
        server 192.168.56.203:22623;  #master2
        server 192.168.56.204:22623;  #master3
    }
    server {
        listen 22623;
        proxy_pass ocp_m_config;
    }
    upstream ocp_http {
        #round-robin;
        server 192.168.56.205:80;  #worker1
        server 192.168.56.206:80;  #worker2
    }
    server {
        listen 80;
        proxy_pass ocp_http;
    }
    upstream ocp_https {
        #round-robin;
        server 192.168.56.205:443;  #worker1
        server 192.168.56.206:443;  #worker2
    }
    server {
        listen 443;
        proxy_pass ocp_https;
    }
}
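On top of the load balancer itself, 4.x also expects the api, api-int and *.apps names to resolve to it. A rough zone sketch, with the names derived from the install-config.yaml above (cluster name test, base domain hololy-local.com) and a placeholder LB IP:
; adjust names and IP to your own cluster
api.test.hololy-local.com.      IN  A  192.168.56.100
api-int.test.hololy-local.com.  IN  A  192.168.56.100
*.apps.test.hololy-local.com.   IN  A  192.168.56.100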
thanks.
In ejabberd 18.01-2, installed in an LXC container running Ubuntu 18.04 Bionic LTS via apt, I'm trying to set up mod_http_upload.
In the listen section, I have:
listen:
  -
    port: 5444
    module: ejabberd_http
    tls: true
    request_handlers:
      "/upload": mod_http_upload
In the configuration file, the commented-out port was 5444; however, in the current documentation it is 5443, so I am not sure which one is right.
In the modules section, I have:
modules:
  mod_http_upload:
    host: "upload.ejabberd.forumanalogue.fr"
    max_size: infinity
    thumbnail: true
    put_url: "https://ejabberd.forumanalogue.fr:5444/upload"
    docroot: "/ejabberd/upload"
When I start the service, I can see an odd message in the logs:
2019-11-11 21:02:35.287 [warning] <0.367.0>#ejabberd_pkix:handle_call:255 No certificate found matching 'upload.ejabberd.forumanalogue.fr': strictly configured clients or servers will reject connections with this host; obtain a certificate for this (sub)domain from any trusted CA such as Let's Encrypt (www.letsencrypt.org)
It is strange because I have a signed wildcard certificate:
certfiles:
- "/etc/letsencrypt/live/forumanalogue.fr/*.pem"
I can see the service with my client (Gajim), but when I try to send a file to another local account, I receive the error "Access denied by service policy"; see the complete stanza:
<iq xml:lang='en' to='foo@forumanalogue.fr/gajim.HCLJ4BZI' from='upload.ejabberd.forumanalogue.fr' type='error' id='1dd35274-90e9-4b3b-9608-0fab59afe34e'>
<request xmlns='urn:xmpp:http:upload'>
<filename>a.out</filename>
<size>27232</size>
<content-type>application/octet-stream</content-type>
</request>
<error code='403' type='auth'>
<forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
<text xml:lang='en' xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'>Access denied by service policy</text>
</error>
</iq>
I had to enable debug logging in order to see something. It is quite verbose, but I think the relevant part, which is not redundant with the client message, is:
2019-11-11 20:53:08.329 [debug] <0.501.0>#mod_http_upload:process_slot_request:544 Denying HTTP upload slot request from foo@forumanalogue.fr/gajim.HCLJ4BZI
Thank you for your help.
I tried with ejabberd 18.01 and a configuration similar to yours, and it works for me.
Looking at the source code, that "process_slot_request:544" error means that the account attempting to use the upload feature is not allowed by the "local" access rule in the vhost it sent the request to. Probably it's a remote account - remote to that upload service. In other words, the service upload.whatever can only be used by accounts like user12@whatever.
In your case, you are attempting to use upload.ejabberd.forumanalogue.fr from the account foo@forumanalogue.fr, which is not local to that upload service.
Several ideas; I hope one of them suits your specific setup:
A) Don't mess with vhosts. If it's forumanalogue.fr, keep it that way everywhere.
B) Use @HOST@ in the host and put_url options (see the sketch after this list).
C) Or, if you really want to mess with hosts, add access rights so that accounts in that vhost are considered "local" to the upload service.
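For idea B, a sketch of what those options could look like (not tested against 18.01; the port and docroot are taken from your config):
modules:
  mod_http_upload:
    host: "upload.@HOST@"
    put_url: "https://@HOST@:5444/upload"
    docroot: "/ejabberd/upload"
    max_size: infinity
    thumbnail: true
With @HOST@, ejabberd expands the option per virtual host, so the upload service stays local to each vhost it serves.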
I have an application running in OpenShift Online Starter which has worked for the last 5 months: a single pod behind a service, with a route defined that does edge TLS termination.
Since Saturday, when trying to access the application, I get the error message:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
The pod is running: I can exec into it and check this, and I can port-forward to it and access it.
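For reference, that port-forward check was along these lines (pod name taken from the oc output below):
oc port-forward taboo3-23-jt8l8 8080:8080
# in a second terminal:
curl http://localhost:8080/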
Checking the different components with oc:
$ oc get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE
taboo3-23-jt8l8 1/1 Running 0 1h 10.128.37.90 ip-172-31-30-113.ca-central-1.compute.internal
$ oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
taboo3 172.30.238.44 <none> 8080/TCP 151d
$ oc describe svc taboo3
Name: taboo3
Namespace: sothawo
Labels: app=taboo3
Annotations: openshift.io/generated-by=OpenShiftWebConsole
Selector: deploymentconfig=taboo3
Type: ClusterIP
IP: 172.30.238.44
Port: 8080-tcp 8080/TCP
Endpoints: 10.128.37.90:8080
Session Affinity: None
Events: <none>
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
taboo3 taboo3-sothawo.193b.starter-ca-central-1.openshiftapps.com taboo3 8080-tcp edge/Redirect None
I tried to add a new route as well (with or without TLS), but I am getting the same error.
Does anybody have an idea what might be causing this and how to fix it?
Addition, April 17, 2018: I got an email from OpenShift Online support:
It looks like you may be affected by this bug.
So waiting for it to be resolved.
The problem has been resolved by OpenShift Online; the application is working again.
We are setting up a test OpenShift Origin cloud which we created using the openshift-ansible playbook. We are following the documentation at: https://docs.openshift.com/container-platform/latest/install_config/install/advanced_install.html
We have not done anything special concerning the openshift registry or router.
We are pretty new to this topic and have been trying for a few days to make the OpenShift registry accessible...
We have 3 hosts:
master (unschedulable)
node-1, which is set to the region 'infra' and runs the registry and router services
node-2 (other region).
Here are the services running in the default project:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-registry 172.30.78.66 <none> 5000/TCP 3h
kubernetes 172.30.0.1 <none> 443/TCP,53/UDP,53/TCP 3h
registry-console 172.30.190.63 <none> 9000/TCP 3h
router 172.30.197.135 <none> 80/TCP,443/TCP,1936/TCP 3h
When we SSH directly to node-1, where the registry and router are running, we can access the registry without any problem and we can push some images - exactly what is described here: docs.openshift.org/latest/install_config/registry/accessing_registry.html
But we cannot access the registry from the other hosts (master or node-2) and we really do not understand how we can make the registry accessible... We have of course read: docs.openshift.org/latest/install_config/registry/securing_and_exposing_registry.html#access-insecure-registry-by-exposing-route
We have used this command:
oc expose service docker-registry --hostname=<hostname> -n default
The documentation says: You must be able to resolve this name externally via DNS to the router’s IP address.
As the router does not have any EXTERNAL-IP address attached to it, we do not understand how to reach it.
Is there any oc or oadm command for exposing the router through an external-ip address?
Thanks a lot in advance
Emmanuel
Based on your stated configuration, I would expect the path to your UI/API for OpenShift (openshift.yourdomain.com) to be routed to the same IP as your node-1, because that is where you are running the router.
If that is the case, then you would point the hostname you are passing via the command below to the same IP in DNS, or make it a CNAME to that host.
oc expose service docker-registry --hostname=<hostname> -n default
In a larger setup with a dedicated set of load balancer (lb) nodes you might have a specific A record for that set. You could then make the hostname a CNAME to that record.
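A rough sketch of what the DNS side could look like (names and IP are placeholders, not from the question):
; registry hostname used in "oc expose", pointed at the node running the router
registry.example.com.   IN  CNAME  node-1.example.com.
node-1.example.com.     IN  A      203.0.113.10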