I'm trying to run an ASP.NET Core application on a Raspberry Pi with Windows 10 IoT.
Fiddling with the firewall, I found out that it is blocking the traffic. The relevant predefined rule is:
Rule Name: Network Discovery (SSDP-In)
----------------------------------------------------------------------
Enabled: Yes
Direction: In
Profiles: Domain,Private,Public
Grouping: Network Discovery
LocalIP: Any
RemoteIP: LocalSubnet
Protocol: UDP
LocalPort: 1900
RemotePort: Any
Edge traversal: No
Action: Allow
Anyhow - the firewall blocks the UDP packets.
So I added my own rule:
Rule Name: MySSDP
----------------------------------------------------------------------
Enabled: Yes
Direction: In
Profiles: Domain,Private,Public
Grouping:
LocalIP: Any
RemoteIP: LocalSubnet
Protocol: UDP
LocalPort: 1900
RemotePort: Any
Edge traversal: No
Action: Allow
Now discovery works - but I'm curious why I had to define my own rule.
It looks exactly like the predefined one - except that it has no grouping.
Am I missing something - or is the SSDP-In rule not active (although it states enabled)?
If so - how can I check / change its status?
I'm using the Horizontal Pod Autoscaler to scale my pods in an OpenShift environment. I have a web application running in pods. As the pods scale, I get an HTTP status code 404 error during the first few seconds of an HTTP request. Is this because the route is sending requests to a pod that is still in the process of being launched? If so, is there any way to prevent the error? I've tried setting router.openshift.io/haproxy.health.check.interval to a small value, but I still can't avoid this error.
It seems you did not configure your readiness checks correctly. Check the documentation on how to add readiness and liveness checks to your Deployment.
A readiness probe determines if a container is ready to accept service requests.
A liveness probe determines if a container is still running.
In newer versions of OpenShift / Kubernetes there is now also the startupProbe, which may help you in your case.
Here is an example of a Deployment with a liveness and a readiness probe:
kind: Deployment
apiVersion: apps/v1
...
spec:
  ...
  template:
    spec:
      containers:
      - name: example
        readinessProbe:
          tcpSocket:
            port: 8080
        livenessProbe:
          tcpSocket:
            port: 8080
        ...
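If the 404s happen because a freshly launched pod needs some time to boot, a startupProbe can hold off the readiness and liveness checks until the application is up. Here is a minimal sketch in the same style as the example above (the port and the thresholds are assumptions; tune them to your application's startup time):

kind: Deployment
apiVersion: apps/v1
...
spec:
  ...
  template:
    spec:
      containers:
      - name: example
        startupProbe:
          tcpSocket:
            port: 8080
          # allow up to 30 * 10 = 300 seconds for the application to start
          failureThreshold: 30
          periodSeconds: 10
        ...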
It is getting really hard to understand and debug Ingress rules. Can anyone share a good reference?
The question is: how does the Ingress work without specifying a host?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
  name: my-app
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: http
Upon assigning a host (e.g. host: aws-dns-name.org) it doesn't work.
Upon changing the path to path: /v1/ it also doesn't work :(
How can I debug / check whether the mapping is done correctly?
Additionally, when should I use extensions/v1beta1 versus networking.k8s.io/v1beta1?
There is pretty good documentation available here for getting started. It may not cover all aspects, but it does answer your questions. An Ingress controller is basically a reverse proxy and follows similar ideas.
The snippet you have shared is called a single-backend or single-service Ingress. The / path would be the default. Since it's the only entry, every request on the exposed port will be served by the tied service.
The host entry host: aws-dns-name.org should work as long as your DNS resolves aws-dns-name.org to the IP of a node in the cluster or of the LB fronting the cluster. Ping that DNS entry and see if it resolves to the target IP correctly. Try curl -H 'Host: aws-dns-name.org' IP_Address to verify whether the Ingress responds correctly. NGINX uses the Host header to decide which backend service to use. If you are sending traffic to the IP with a different Host header, it will not connect to the right service and will serve the default backend.
If you are doing path-based routing, which can be combined with host-based routing as well, NGINX will route to the correct backend service based on the intercepted path. However, just like any other reverse proxy, it will send the request to the specified path (http://service:80/v1/). Your application may not be listening on the /v1/ path, so you will end up with a 404. Use the rewrite-target annotation to let NGINX know that you are serving at /.
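For illustration, a sketch based on your manifest (the annotation name is the standard one for the community NGINX ingress controller; treat the exact path handling as an assumption and check your controller version):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # strip the matched path before forwarding to the backend service
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /v1/
        backend:
          serviceName: my-app
          servicePort: http

Note that newer NGINX ingress controller releases (0.22 and later) expect a capture group instead, e.g. path: /v1(/|$)(.*) combined with rewrite-target: /$2.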
API resource versions do switch around in Kubernetes and can be hard to keep up with. The correct apiVersion now is networking.k8s.io/v1beta1 (networking.k8s.io/v1 starting with 1.19); the old version still works but will eventually stop working. I have seen cluster upgrades break applications because somebody forgot to update the API version.
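For comparison, this is roughly how the same single-service Ingress looks under the newer networking.k8s.io/v1 schema (a sketch, not taken from the question; note the renamed backend fields and the mandatory pathType):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix          # pathType is required in v1
        backend:
          service:                # serviceName/servicePort moved under backend.service
            name: my-app
            port:
              name: http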
I have an application running in OpenShift Online Starter, which has worked for the last 5 months: a single pod behind a service, with a route defined that does edge TLS termination.
Since Saturday, when trying to access the application, I get the error message
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
The pod is running; I can exec into it and check this, and I can port-forward to it and access it.
Checking the different components with oc:
$ oc get po -o wide
NAME              READY     STATUS    RESTARTS   AGE       IP             NODE
taboo3-23-jt8l8   1/1       Running   0          1h        10.128.37.90   ip-172-31-30-113.ca-central-1.compute.internal
$ oc get svc
NAME      CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
taboo3    172.30.238.44   <none>        8080/TCP   151d
$ oc describe svc taboo3
Name: taboo3
Namespace: sothawo
Labels: app=taboo3
Annotations: openshift.io/generated-by=OpenShiftWebConsole
Selector: deploymentconfig=taboo3
Type: ClusterIP
IP: 172.30.238.44
Port: 8080-tcp 8080/TCP
Endpoints: 10.128.37.90:8080
Session Affinity: None
Events: <none>
$ oc get route
NAME      HOST/PORT                                                     PATH      SERVICES   PORT       TERMINATION     WILDCARD
taboo3    taboo3-sothawo.193b.starter-ca-central-1.openshiftapps.com              taboo3     8080-tcp   edge/Redirect   None
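For reference, the route in that output roughly corresponds to a manifest like the following (reconstructed from the oc output above; the exact field values are assumptions):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: taboo3
spec:
  host: taboo3-sothawo.193b.starter-ca-central-1.openshiftapps.com
  to:
    kind: Service
    name: taboo3            # the ClusterIP service shown above
  port:
    targetPort: 8080-tcp
  tls:
    termination: edge       # TLS is terminated at the router
    insecureEdgeTerminationPolicy: Redirect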
I tried to add a new route as well (with or without TLS), but I get the same error.
Does anybody have an idea what might be causing this and how to fix it?
Addition, April 17, 2018: I got an email from OpenShift Online support:
It looks like you may be affected by this bug.
So I am waiting for it to be resolved.
The problem has been resolved by OpenShift Online; the application is working again.
I'm trying to protect the Orion Context Broker using the KeyRock IdM, the Wilma PEP Proxy and the AuthZForce PDP over Docker. For now, level 1 security works well and I can deny access to non-logged-in users, but I get this error on Wilma when trying to add level 2.
AZF domain not created for application <applicationID>
Here is my azf configuration in Wilma's config.js file:
config.azf = {
    enabled: true,
    protocol: 'http',
    host: 'azfcontainer',
    port: 8080,
    custom_policy: undefined
};
And this is how I set the access control configuration on KeyRock:
# ACCESS CONTROL GE
ACCESS_CONTROL_URL = 'http://azfcontainer:8080'
ACCESS_CONTROL_MAGIC_KEY = None
I have created the custom policies on Keyrock, but AuthZForce logs don't show any request from KeyRock or Wilma, so no domain is created on the PDP. I have checked that all containers can see and reach each other and that all ports are up. I may be missing some configuration.
These are the versions I'm using:
keyrock=5.4.1
wilma=5.4
authzforce=6.0.0/5.4.1
This question is the same as “AZF domain not created for application” AuthZForce, but my problem persists even with the AuthZForce GE configuration shown there.
I found the cause of this problem. It occurs when AuthZForce is not behind a PEP Proxy and the variable ACCESS_CONTROL_MAGIC_KEY is therefore left unmodified (None by default).
It seems Horizon reads both the ACCESS_CONTROL_URL and ACCESS_CONTROL_MAGIC_KEY parameters in openstack_dashboard/local/local_settings.py when it needs to connect to AuthZForce. Theoretically, the second parameter is optional (it introduces an 'X-Auth-Token' header for the PEP Proxy), but if Horizon detects that it is None (the default value in local_settings.py) or an empty string, the log shows a warning and the function "policyset_update" in openstack_dashboard/fiware_api/access_control_ge.py returns immediately. So the communication with AuthZForce never takes place.
The easiest way to solve the problem is to write some text as the magic key in openstack_dashboard/local/local_settings.py:
# ACCESS CONTROL GE
ACCESS_CONTROL_URL = 'http://authzforce_url:port'
ACCESS_CONTROL_MAGIC_KEY = '1234567890' # DO NOT LEAVE None OR EMPTY
Thus, an 'X-Auth-Token' header will be generated, but it shouldn't affect the communication when AuthZForce isn't behind a PEP Proxy (the header is simply ignored).
Note: remember to delete the cached bytecode file "openstack_dashboard/local/local_settings.pyc" when making changes, to ensure the new config is picked up after restarting the Horizon service.
PS: I sent a pull request to https://github.com/ging/horizon with a simple modification that fixes the problem.
I'm running on the Google Container Engine platform and have an Ingress for which I would like a default backend service for almost all of my domains (there are quite a few), but another, specific service for one domain. Going by my understanding of the ingress user guide (scan for "Default Backends:" in there), the config below should work correctly.
However, it never creates the second backend. Running kubectl describe ingress on the created Ingress, and looking at the LB in the Google console, only the first "default" backend service is listed. Changing the default one into a rule fixes the problem, but it means I have to explicitly list all of the domains I want to support.
So, I'm assuming I have a bug in the config below. If so, what is it?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: boringsites
spec:
  backend:
    serviceName: boringsites
    servicePort: 80
  tls:
  - secretName: boringsites-tls
  rules:
  - host: subdomain.example.com
    http:
      paths:
      - backend:
          serviceName: other-svc
          servicePort: 80
I just created https://gist.github.com/bprashanth/9f4533b19fd864b723ba0720a3648fa3#file-default-basic-yaml-L94 on Kubernetes 1.3 and it works as expected. Perhaps you can debug backwards? Where are you running kube and what version are you using? There is a known and fixed race in 1.2 that you might be running into, especially if you updated the Ingress. Also note that you need services of type=NodePort, or the ingress controller on GCE will ignore the service you plugged into the resource; see the sketch below.
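For illustration, a backing Service of type NodePort might look roughly like this (a sketch; the selector label and target port are assumptions, adjust them to your deployment):

apiVersion: v1
kind: Service
metadata:
  name: other-svc
spec:
  type: NodePort          # required so the GCE ingress controller picks up the service
  selector:
    app: other-app        # assumed pod label
  ports:
  - port: 80              # must match servicePort in the Ingress
    targetPort: 8080      # assumed container port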